Odin
About
Overview
-
Open-source.
-
Created on 2016-07-07.
-
Overview .
-
FAQ .
-
Philosophy :
-
Simplicity and readability
-
Programs are about transforming data into other forms of data.
-
Data structures are just data.
-
Odin is not OOP.
-
Odin doesn't have any methods.
-
-
The entire language specification should be possible to be memorized by a mere mortal.
-
-
"The killer feature is that it has no features".
-
"bring them the 'joy of programming' back".
-
He ends the article by asking how to market the language; he doesn't know himself.
-
-
-
Paradigm :
-
Focus on the procedural paradigm.
-
GingerBill: "Odin is not a Functional Programming Language".
-
-
Roadmap :
-
FAQ:
-
"There is no official roadmap. Public roadmaps are pretty much a form of marketing for the language rather than being anything useful for the development team. The development team does have internal goals, many of which are not viewable by the public, and problems are dealt with when and as necessary.
-
Odin as a language is pretty much done, but Odin the compiler, toolchain, and core library are still in development and always improved."
-
-
-
C Integration :
-
Odin was designed to facilitate integration with C code. It supports interfacing with C libraries directly and makes interoperability with other languages straightforward.
-
-
Metaprogramming :
-
Odin offers some metaprogramming facilities, such as parametric polymorphism and compile-time `when` statements (but no macros), without becoming overly complex.
-
-
Compiler :
-
Written in C++.
-
-
Aimed systems :
-
Ginger Bill:
-
Odin has been designed specifically for modern systems: 32-bit and 64-bit platforms.
-
I highly recommend you don't use Odin, Zig or C for 8-bit chips; prefer a high-level assembly language instead.
-
-
-
Style :
-
We are not going to enforce any case style ever. You can do whatever you want.
-
Sources
-
-
Odin, RayLib.
-
Karl Zylinski on Odin and RayLib .
-
"Burnout with Unreal Engine".
-
No idea of what is happening in the engine.
-
Difficult and slow interaction.
-
C++.
-
etc.
-
-
"Odin fell into my lap and was perfect for what I wanted".
-
"Hot-reloading was the best thing I did; without it I would have gotten discouraged, because I am very impatient with iteration times".
-
Many things live in .dlls that are watched for changes, so when they change they are reloaded; something like that.
-
"Don't necessarily create config files to tweak the game, but use code as data for tweaks".
-
Nah. Maybe if there is actual hot-reloading it can be okay, but a config file is very useful at times.
-
I think they are not mutually exclusive; config files are good for some things and hot-reloading for fast iterations.
-
-
-
"I made my own UIs, using Rectangles, with text inside, elegant borders, mouse hover system".
-
"I have fun and love the feeling of doing things from scratch".
-
"Why RayLib?"
-
"A reminder of how fun programming was in college when I was 22"; he's 36 now.
-
So, he didn't really answer the question.
-
-
Critiques of OOP, both from Karl and Wookash.
-
Nice.
-
Mentioned Mike Acton and DoD, etc.
-
-
Overall, the video is nice but doesn't talk about anything technical regarding Odin or RayLib.
-
-
-
-
Lives .
-
Odin, Zig, Haskell.
-
-
Nadako .
-
Sokol, SDL, Vulkan, all in Odin.
-
-
-
Odin, Zig.
-
Games and Apps made in Odin
My Impressions
Positives
-
Many! This has been my favorite language so far.
-
It's the most fun I had with a language!
-
Solves many problems I had with Zig, Rust, C/C++.
-
(2025-04-20) I really like the big focus on NAMES in the syntax :
-
I found the syntax very weird initially, but in reality it is ultra intuitive and I have come to like it a lot.

```odin
my_var := 123
my_proc :: proc() {}
```
-
-
(2025-04-20) No need for `;` and I don't miss it :
-
I never missed `;`; after all, what made the experience positive are the `{ }`, not the `;`.
-
You can use `;` if you want, though.
-
-
(2025-04-20) No need for `( )` in expressions :
-
Much better than in Zig and C, nice.
-
-
(2025-04-20) Enum access is simple, similar to Swift :
-
Can be used as `.A` instead of `MyEnum.A`, in the correct context.
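A minimal sketch of the implied selector (the type and values here are made up):

```odin
package main

import "core:fmt"

My_Enum :: enum { A, B, C }

main :: proc() {
	e: My_Enum = .A // the enum name can be omitted where the type is known
	if e == .A {
		fmt.println("it's A")
	}
}
```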
-
-
(2025-04-20) No need to specify a `void` return type when there is no return :
-
Nice.
-
-
(2025-04-20) No methods :
-
Great.
-
-
(2025-04-20) Excellent build system :
-
Anything compared to C/C++ is excellent, to be fair.
-
Either way, it's by far the easiest language to compile I've seen.
-
-
(2025-04-20) The package system really seems very good, with folders :
-
Inspired by Go.
-
After seeing Ginger Bill's explanation in this video {26:50 -> 34:30} , I found it very nice.
-
Really seems to be a very good solution for managing exports/imports.
-
Negatives
-
(2025-12-12) I don't like the `context` system at all .
-
I have lots of critiques around it.
-
See Context .
-
I had a discussion about this topic in this discord thread .
-
There are also other topics about explicitness that I'd like to go through, but I think what I wrote sums up what bothered me today.
-
-
(2025-12-13) I don't like `@(init)` and `@(fini)` .
-
Quoting a snippet from the discord thread about that I found interesting and agree with:
-
Barinzaya:
-
I think `@(init)` procs are kind of an anti-pattern. I dislike the "this proc is now always going to be called whether you want it or not" nature of it.
-
-
Caio:
-
I completely agree. The only reason I used `@(init)` in this situation was because other libraries do. I had to place the profiler earlier than all of them, so the only way to do it is by also being `@(init)` or running before `_startup_runtime`.
-
-
-
(2025-11-13) I don't like how `@(require_results)` is NOT the default way of handling results; I would prefer the opposite .
-
By default, errors can be ignored. Not good. Things like `#optional_ok` and `#optional_allocator_error` exist, but the main problem is actually that `@(require_results)` is opt-in. By default a procedure will not require its results to be handled. I wish the opposite were true: you should have to opt out of required results with something like `@(optional_results)`; the priorities should have been inverted.
-
There's also the annoyance of having to add `@(require_results)` to every math function and the like.
-
I made a suggestion for something like `#+vet explicit-returns`, as a way to have every unhandled return be treated as an error, even for `#optional_ok` or `#optional_allocator_error`, as well as a compiler flag. This would just be an optional flag, per file (even though I'd prefer per library), but it was denied :/
-
-
(2025-11-13) I don't like the implicit usage of `context.allocator` across A LOT of libraries; it's basically the standard in Odin .
-
This has led me to more bugs than it has helped with anything.
-
Also, this leads to code that focuses heavily on "constructors" and "destructors", as by default the `context.allocator` is a `runtime.heap_allocator()`, which is just a wrapper around `malloc`.
-
Some libraries are OK with you using an arena in its place, but other libraries use `defer delete()` implicitly, and that makes them incompatible with a more straightforward and optimized design of managing memory, focused on lifetimes with arenas.
-
-
Currently, to improve this:
-
I use `panic_allocator` as the default for `context.allocator`, via the `-default-to-panic-allocator` flag. I don't ever reassign the `context.allocator`. I keep it as the `panic_allocator` for the whole application, so if I forget to be explicit about an allocation, the app crashes. This is far from perfect, as it's a runtime check, but it's better than losing track of your memory.
-
I use `#+vet explicit-allocators` on top of every file. This makes it so `allocator := context.allocator` gives an error. So `new(int)` will give an error, but `new(int, allocator)` will not. Also not perfect, as I'd prefer this to be a compilation flag, etc.
-
-
Both improvements above just hide the problem a bit. I don't like that I had to work around one of the main language design decisions just to have safer and saner code.
-
Even if `context` were removed, code could go from `allocator := context.allocator` to `allocator := runtime.allocator` (a thread-local global variable, as I suggested). So it's not really a `context` thing, but more about how the language heavily favors a design of default allocators and implicitness.
-
I can see this being solved either by removing default parameters in procedures, or by a code style that enforces explicit allocators instead of the opposite.
-
-
(2025-11-13) I would prefer if there were no default parameters in procedures .
-
This sounds a bit wild, but I came to realize how little I actually need default parameters.
-
They result in implicit behavior, which I believe leads to worse code.
-
Meanwhile, working without default parameters is actually an interesting challenge to solve that I think results in much better APIs.
-
-
(2025-04-20) I don't like `using` outside of structs :
-
Ginger Bill also considers this a mistake.
-
Read the `using` section for more information.
-
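For contrast, a tiny sketch of the `using`-inside-a-struct case, which works well (type names are made up):

```odin
Entity :: struct {
	x, y: f32,
}

Player :: struct {
	using entity: Entity, // Player now exposes x and y as direct fields
	health: int,
}

// p: Player
// p.x = 10 works without going through p.entity.x
```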
-
(2025-04-20)
Lack of keywords for concurrency is somewhat annoying (async/await) .
-
Maybe this is ok, but I do have to investigate a bit more about this.
-
(2025-11-13)
-
Well, I made a library for that, so problem solved. I'd much rather have my own library than something built into the language now, I think.
-
-
-
(2025-04-20)
Down-casting can be complex :
-
I cannot compare subtypes with `is`, like in GDScript:

```gdscript
func _detect_hitbox(area: Area2D) -> void:
	if not (area is Hitbox):
		Debug.red('(%s | Hurtbox) The area is not a Hitbox.' % _name)
		return
```
It is necessary to use advanced idioms, with Unions / Enums, etc., to get the desired information.
-
See Odin#Advanced Idioms, Down-Cast and Up-Cast for more information.
-
(2025-11-13)
-
I think I'm ok with this. It's actually really rare I have to use something like the code shown in GDScript, and avoiding these situations led the code to be more understandable.
-
It's a lower level thing, but once you get used to it, I think it's ok.
-
-
-
(2025-04-20)
Having to use `->` for procedure returns :
-
Minor, but I feel it could be hidden.
-
(2025-07-03)
-
Genuinely, I don't care at all.
-
-
-
(2025-04-20)
Having to use the `case` keyword for switches :
-
Minor, but I think the keyword shouldn't exist.
-
(2025-07-03)
-
Genuinely, I don't care at all.
-
I actually kind of like it.
-
-
-
(2025-04-20)
Having to use the `proc` keyword for procedures :
-
Ultra minor; I got used to the keyword and it's convenient considering how similar the syntax is to structs:

```odin
my_proc :: proc() {}
my_struct :: struct {}
```
-
(2025-07-03)
-
JAI opts not to use the keyword, but I have come to appreciate its use.
-
-
Installation
Versions used
-
(2025-12-05)
-
Odin: I'm using `dev-2025-12` (2025-12-04).

```
cd C:\odin
.\build.bat release
```
-
OLS: I'm using `787544c1` (2025-12-03).

```
cd C:\odin-ols
.\build.bat
```
-
Remember to stop all OLS executions in VSCode, or just close VSCode.
-
Building from source
-
Repo .
-
`x64 Native Tools Command Prompt for VS 2022`
-
Search for this terminal in the Windows search bar.
-
-
`cd c:\odin`
-
`build.bat`
-
or `build.bat release` for a faster compiler (the build command itself takes longer).
-
-
`build_vendor.bat` .
-
Considerations :
-
Apps running Odin must be closed.
-
VSCode can stay open, but .exe compiled with Odin must be closed.
-
-
Building
Build
-
Compiles, generates executable.
odin build .
Run
-
Compile, generate executable, run executable.
`odin run .`
-
`.` refers to the directory.
-
Odin thinks in terms of directory-based packages. The `odin build <dir>` command takes all the files in the directory `<dir>`, compiles them into a package and then turns that into an executable. You can also tell it to treat a single file as a complete package by adding `-file`, like so: `odin run hellope.odin -file`
Help
-
`odin build -help`
-
Output path:
-
`odin build . -out:foo.exe`
-
`odin build . -out:out/odin-engine.exe`
-
The directory is not created by default, so if the `out` dir doesn't exist the build will give an error; use `mkdir` beforehand.
-
-
Subsystems
Remove terminal from executable
-
For Windows:
-
`-subsystem:windows` .
-
Compile-time Stuff
Compile-time Flags
-
Check `base:builtin/builtin.odin` .
When
-
`when` .
-
Certain compile-time expressions.
-
The `when` statement is almost identical to the `if` statement, but with some differences:
-
Each condition must be a constant expression, as a `when` statement is evaluated at compile time .
-
The statements within a branch do not create a new scope.
-
The compiler checks the semantics and code only for statements that belong to the first condition that is `true`.
-
An initial statement is not allowed in a `when` statement.
-
`when` statements are allowed at file scope.
-
when ODIN_ARCH == .i386 {
fmt.println("32 bit")
} else when ODIN_ARCH == .amd64 {
fmt.println("64 bit")
} else {
fmt.println("Unsupported architecture")
}
#config
In Code
-
`TRACY_IS_ENABLED :: #config(TRACY_ENABLE, false)`
-
The name on the left is for code use. The name on the right is for the compiler.
-
They can be the same, it doesn't matter.
-
Compilation Flag
odin run . -define:TRACY_ENABLE=true
-
Caio:
-
If a lib defines `OPTION :: #config(OPTION, false)`, is it possible for me to enable it in my app without using compiler flags? If I redefine it in my app as `OPTION :: #config(OPTION, true)`, it doesn't work.
-
-
Oskar:
-
Only compiler flag.
-
Procedure Disabled
@(disabled=CONDITION)
-
Disables the procedure at compile time if the condition is met.
-
The procedure will not be used when called.
-
The procedure cannot have a return value.
-
The procedures using this are still type-checked.
-
This differs from Zig. Odin tries to check as much as possible.
-
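A small sketch of a typical use, assuming the builtin `ODIN_DEBUG` constant (which is `true` in `-debug` builds); the proc name is made up:

```odin
package main

import "core:fmt"

// Calls to this proc are compiled out when not building with -debug.
@(disabled=!ODIN_DEBUG)
debug_log :: proc(msg: string) {
	fmt.println(msg)
}

main :: proc() {
	debug_log("only printed in -debug builds")
}
```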
-
Modify the compilation details or behavior of declarations.
Static / Read-Only
-
@(static)
Read-only
-
@(rodata)
Comp-time Loop
-
Barinzaya:
-
There's no compile-time loop, though. I seem to recall Bill saying something about not wanting to add it, IIRC because it's a bit of a slippery slope (e.g. then people will want to be able to iterate over struct fields). I can't find the message I'm thinking of, though.
-
-
Sobex:
-
Since you unroll you can kinda do a unrolled loop with inlined recursion
-
```odin
comp_loop :: #force_inline proc(as: []int, $i: int, $end: int) {
	a := as[i]
	fmt.print(a)
	when i + 1 != end do comp_loop(as, i+1, end)
}

as := [?]int{5, 4, 3, 2, 1}
comp_loop(as[:], 0, 5)
```
Build Tags
-
Used to define build platforms.
-
It is recommended to use File Suffixes anyway.
-
The suffix is functional, not just decorative.
-
"For example, `foobar_windows.odin` would only be compiled on Windows, `foobar_linux.odin` only on Linux, and `foobar_windows_amd64.odin` only on Windows AMD64."
-
Ignore
#+build ignore
Optimizations
Force Inline ( `#force_inline` )
-
Doesn't work with `-o:none`.
Intrinsics
-
`intrinsics.type_is_integer()`
-
Caio:

```odin
proc (a: [$T]any) where intrinsics.type_is_integer(T)
// Error: Expected a type for 'type_is_integer', got 'T'
//        intrinsics.type_is_integer(T)
```
-
Blob:
-
Because the type of `T` is an untyped integer, as it's technically a constant, & there's no way to check against an untyped type (I would like there to be, honestly). What you'd want to check against is the type of the array itself: `proc (a: $E/[$T]any) where intrinsics.type_is_array(E)`.
-
-
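A self-contained sketch of a `where` clause that does work, along the lines of the fix above (proc and variable names are mine):

```odin
package main

import "base:intrinsics"
import "core:fmt"

// Accepts any fixed-size array whose element type is an integer.
sum :: proc(a: [$N]$T) -> (result: T) where intrinsics.type_is_integer(T) {
	for v in a do result += v
	return
}

main :: proc() {
	fmt.println(sum([3]int{1, 2, 3})) // 6
}
```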
-
`intrinsics.type_elem_type()`
-
Underlying element type.
-
Useful for arrays.
-
Custom Attributes
-custom-attribute:<string>
Add a custom attribute which will be ignored if it is unknown.
This can be used with metaprogramming tools.
Examples:
-custom-attribute:my_tag
-custom-attribute:my_tag,the_other_thing
-custom-attribute:my_tag -custom-attribute:the_other_thing
-
If you don't use this flag for a custom attribute, there will be a compiler error.
-
I imagine this is best used in conjunction with `core:odin/parser`, or something like it?
Example
@(my_custom_attribute)
My_Struct :: struct {
}
Package System
What is a Package
-
"A Package is basically a folder with Odin code in it, where everything inside that folder becomes part of that package".
-
"The Package system is made for library creation, not necessarily to organize code within a library".
-
-
Examples show how using different packages within the same game can create friction, as:
-
You have to be careful with cyclic dependencies.
-
You have to use imported library name prefixes everywhere.
-
-
-
Everything is accessible within the same Package.
-
"The only reason to separate code into different files within the same package is for code organization; in practice it's as if everything were together".
-
In the given example, all files can communicate with each other without "include", since all belong to the same package and will be compiled into a single "thing".
-
.
-
"I can simply cut the code and paste it in another file and everything will still work the same".
-
Basic Usage
Creating a package
-
"Packages cannot have cyclic references".
-
"game -> ren".
-
"ren -!> game".
-
-
Keyword `package` :
-
All files must have a `package` declaration at the top.
-
The name does not need to be the same as the folder the file is in, it can be anything.
-
-
All files within the same package must have the same name at the top.
-
If not, it gives a compiler error.
-
-
Which name to use :
-
If you are making a game, the `package` name is not very important.
But if you are making something intended for use by others, then choose a good and unique name among existing package names.
-
-
Installing a new package
-
Download the folder, put it there, it works.
Using a package
-
From a collection :
`import rl "vendor:raylib"`
-
From the file system :
-
If no prefix is present, the import will look relative to the current file.
```odin
import ren "renderer"  // Uses the "renderer" package (folder).
import cmn "../common" // Goes to the parent folder and gets the "common" package (folder).
```
-
Collections
-
Odin has the concept of `collections` : predefined paths that can be used in imports.
-
`core:` The most common collection; it contains useful libraries from Odin core like `fmt` or `strings`.
Standard Collections
-
Base :
-
Core :
-
Useful and important things, but not fundamental.
-
-
Vendor :
-
3rd party, but included with Odin.
-
"High-quality, officially supported".
-
-
Joren:
-
When the spec is written (around v1's release), it will be clarified that there are 3 standard "collections":
-
base: defined by the language specification, expected to work the same no matter the compiler vendor,
-
core: would be nice if it mirrors upstream Odin's packages for interoperability, but up to the compiler vendor,
-
vendor: things like RayLib, DirectX, entirely up to the compiler vendor what's shipped here,
-
-
You can still opt to fork Odin and tweak things in `base`, but at that point you have your own dialect of the language that can no longer necessarily be compiled by another Odin implementation, even if you copy across `core`.
-
Shared Collection
-
There's a `shared` folder in the Odin installation folder that you can use for that. It's available as a collection by default (e.g. `import "shared:some_package"`).
Creating new Collections
-
You can define your own collection at build time .
-
You can specify your own collections by including `-collection:name=some/path` when running the compiler.
There's no built-in way to make it "permanent" though.
-
The following will define the collection `my_collection` and point it at the given path.
In the project :
import "my_collection:package_a/package_b"
-
While building :
odin run . -collection:my_collection=<path to "my_collection" folder>
-
If you are using `my_collection` in code but forget to specify the build flag, the project will simply not compile, as Odin doesn't know where `my_collection` is.
Declaration Access
-
All declarations in a package are public by default.
-
`@(private="package")` / `@(private)`
-
The declaration is private to this package.
-
Using `#+private` before the package declaration will automatically add `@(private)` to everything in that file.
-
-
`@(private="file")`
-
The declaration is private to this file.
-
`#+private file` is equivalent to automatically adding `@(private="file")` to each declaration.
-
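A tiny sketch of the access attributes above (all names invented):

```odin
package mylib

@(private) // visible only within this package
internal_state: int

@(private="file") // visible only within this file
file_helper :: proc() {}

public_api :: proc() { // public by default
	file_helper()
}
```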
LSP (OLS - Odin Language Server)
-
OLS .
-
I downloaded OLS, used `build.bat` and `odinfmt.bat`.
-
Stored the entire OLS folder in a directory.
-
Installed the VSCode extension.
-
Set the path of `ols.exe` in the Odin settings inside VSCode.
-
Created the `ols.json` file in my project directory in VSCode, with configs from the OLS GitHub.
Check Args
-
odin check -help
Examples
-
Rickard Andersson's OLS
-
.
Operations
Arithmetic Operations
%
-
Modulo (truncated).
-
The result of `%` takes the sign of the dividend.
%%
-
Remainder (floored).
-
The result of `%%` takes the sign of the divisor.
-
For unsigned integers, `%` and `%%` are identical; the difference appears with signed integers.
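A minimal sketch of the difference on signed integers:

```odin
package main

import "core:fmt"

main :: proc() {
	fmt.println(-7 % 3)        // -1: truncated, sign follows the dividend
	fmt.println(-7 %% 3)       // 2: floored, sign follows the divisor
	fmt.println(7 % 3, 7 %% 3) // 1 1: identical when both operands are positive
}
```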
Logical Operations
"Short-Circuit"
-
It means that if the first condition is `false` then the second condition won't be evaluated.
-
This works in any control flow, as the "short-circuiting" is a property of the logical operators (`&&`, `||`), not of the control flow.
-
So this is also applicable to ternary operations, for example.
-
-
`if a != nil && a.something == true {}`
-
This is safe: when the first condition is `false`, the second one will not be evaluated.
-
-
`if a.something == true && a != nil {}`
-
This is unsafe. The first condition is evaluated first, so if `a == nil`, this will crash.
-
conditional AND ( `&&` )
`a && b` is "b if a else false"
conditional OR ( `||` )
`a || b` is "true if a else b"
Bitwise Operations
OR ( `|` )
-
.
XOR ( `~` )
-
`~u32(0)` is effectively `max(u32)`.
AND ( `&` )
-
.
AND-NOT ( `&~` )
-
.
LEFT SHIFT ( `<<` )
-
.
RIGHT SHIFT ( `>>` )
-
.
Control Flow (if, when, switch, for, defer)
If
-
If .
if x >= 0 {
fmt.println("x is positive")
}
-
Initial statement :
-
Like `for`, the `if` statement can start with an initial statement to execute before the condition.
-
Variables declared by the initial statement are only in the scope of that `if` statement, including the `else` blocks.

```odin
if x := foo(); x < 0 {
	fmt.println("x is negative")
}

if x := foo(); x < 0 {
	fmt.println("x is negative")
} else if x == 0 {
	fmt.println("x is zero")
} else {
	fmt.println("x is positive")
}
```
-
If Ternary
bar := 1 if condition else 42
// or
bar := condition ? 1 : 42
For
-
For .
-
It's the only type of loop.
-
Braces `{ }` or a `do` are always required.
for i := 0; i < 10; i++ {
fmt.println(i);
}
for i := 0; i < 10; i += 1 { }
for i := 0; i < 10; i += 1 do single_statement()
for i in 0..<10 {
fmt.println(i)
}
// or
for i in 0..=9 {
fmt.println(i)
}
str: string = "Some text"
for character in str {
assert(type_of(character) == rune)
fmt.println(character)
}
memory_block_found := false
for block := arena.curr_block; block != nil; block = block.prev {
if block == temp.block {
memory_block_found = true
break
}
}
Switch
-
`switch` is runtime. The compiler doesn't know if those cases are actually reachable or not, so it needs to check them all.
-
The switch evaluates the possibility of entering each case, so the operation inside each case must be compatible.
-
-
The switch has no implicit fallthrough (there is an explicit `fallthrough` keyword), and requires the use of the `case` keyword.
switch arch := ODIN_ARCH; arch {
case .i386, .wasm32, .arm32:
fmt.println("32 bit")
case .amd64, .wasm64p32, .arm64, .riscv64:
fmt.println("64 bit")
case .Unknown:
fmt.println("Unknown architecture")
}
Partial
Foo :: enum {
A,
B,
C,
D,
}
f := Foo.A
switch f {
case .A: fmt.println("A")
case .B: fmt.println("B")
case .C: fmt.println("C")
case .D: fmt.println("D")
case: fmt.println("?")
}
#partial switch f {
case .A: fmt.println("A")
case .D: fmt.println("D")
}
Type switch
-
`v` is the unwrapped value from `value`.
value: Value = ...
switch v in value {
case string:
#assert(type_of(v) == string)
case bool:
#assert(type_of(v) == bool)
case i32, f32:
// This case allows for multiple types, therefore we cannot know which type to use
// `v` remains the original union value
#assert(type_of(v) == Value)
case:
// Default case
// In this case, it is `nil`
}
-
Note :
-
Having multiple types in a single case means the value won't be unwrapped, as there's no single type the compiler can guarantee it'll be.
-
Defer
-
Defer .
-
A defer statement defers the execution of a statement until the end of the scope it is in.
-
The following will print `4` then `234`.
package main
import "core:fmt"
main :: proc() {
x := 123
defer fmt.println(x)
{
defer x = 4
x = 2
}
fmt.println(x)
x = 234
}
Procedures
-
Procedure used to be the common term as opposed to a function or subroutine. A function is a mathematical entity that has no side effects. A subroutine is something that has side effects but does not return anything.
-
A procedure is a superset of functions and subroutines. A procedure may or may not return something. A procedure may or may not have side effects.
multiply :: proc(x: int, y: int) -> int {
return x * y
}
fmt.println(multiply(137, 432))
multiply :: proc(x, y: int) -> int {
return x * y
}
fmt.println(multiply(137, 432))
-
Everything in Odin is passed by value, rather than by reference.
-
All procedure parameters in Odin are immutable values.
-
Passing a pointer value makes a copy of the pointer, not the data it points to.
-
Slices, dynamic arrays, and maps behave like pointers in this case (Internally they are structures that contain values, which include pointers, and the “structure” is passed by value).
Calling Conventions
-
Procedure types are only compatible with the procedures that have the same calling convention and parameter types.
odin
-
By default, Odin procedures use the `"odin"` calling convention.
-
This calling convention is the same as C's; however, it differs in a couple of ways:
-
It promotes values to a pointer if that’s more efficient on the target system
-
Where would this be more efficient?
-
It passes all parameters larger than 16 bytes by reference.
-
The promotion is enabled by the fact that all parameters are immutable in Odin, and its rules are consistent for a given type and platform and can be relied on, since they are part of the calling convention.
-
Passing a pointer value makes a copy of the pointer, not the data it points to. Slices, dynamic arrays, and maps have no special considerations here; they are normal structures with pointer fields, and are passed as such. Their elements will not be copied.
-
Note: This is subject to change.
-
-
It includes a pointer to the current context as an implicit additional argument .
-
contextless
-
Same as `odin` but without the implicit `context` pointer.
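A minimal sketch (the proc name is mine):

```odin
// Can be called from code with no Odin context set up, e.g. C callbacks.
add :: proc "contextless" (a, b: int) -> int {
	return a + b
}
```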
stdcall / std
-
This is the
stdcallconvention as specified by Microsoft.
c / cdecl
-
This is the default calling convention for a procedure in C.
-
If it's within a `foreign` block, the default calling convention is `cdecl`.
fastcall / fast
-
This is a compiler dependent calling convention.
none
-
This is a compiler dependent calling convention which will do nothing to parameters.
Variadic Arguments
-
Ginger Bill: "It's just a slice allocated on the stack."
```odin
foo :: proc(x: ..int) {}

// Calling
foo(1, 2, 3)
// is the same as
temp_array := [3]int{1, 2, 3}
temp_slice := temp_array[:]
foo(..temp_slice)
```
-
Procedures can be variadic, taking a varying number of arguments:
sum :: proc(nums: ..int) -> (result: int) {
for n in nums {
result += n
}
return
}
fmt.println(sum()) // 0
fmt.println(sum(1, 2)) // 3
fmt.println(sum(1, 2, 3, 4, 5)) // 15
odds := []int{1, 3, 5}
fmt.println(sum(..odds)) // 9, passing a slice as varargs
Multiple returns
swap :: proc(x, y: int) -> (int, int) {
return y, x
}
a, b := swap(1, 2)
fmt.println(a, b) // 2 1
-
Implicitly :
`end_msg_as_bytes, err_end := cbor.marshal_into_bytes(end_msg)`
-
Explicitly :
```odin
end_msg_as_bytes: []byte
err_end: cbor.Marshal_Error
end_msg_as_bytes, err_end = cbor.marshal_into_bytes(end_msg)

// or
packet_as_bytes: []byte
err_packet: cbor.Marshal_Error
packet_as_bytes, err_packet = cbor.marshal_into_bytes(packet[:])
```
Closures (They don't exist)
-
Does not have closures, only Lambdas.
-
Odin only has non-capturing lambda procedures.
-
For closures to work correctly would require a form of automatic memory management which will never be implemented into Odin.
foo :: proc() {
y: int
x := proc() -> int {
// `y` is not available in this scope as it is in a different stack frame
return 123
}
}
Procedure Groups (explicit overload)
-
Caio:
-
If I have a struct that inherits another struct with `using`, and then I make a procedure group where the first procedure accepts the original struct and the second accepts the struct that inherits the first, what would happen? Which of these procedures would this "higher level" struct call? Does it depend on the order the procedures are stored in the procedure group, or something like that? Casting has been the weirdest thing for me.
-
-
Barinzaya:
-
The order of the procs in the proc group isn't used to decide which to call, the compiler "scores" each candidate to decide which one is the best fit for a given call. As best I can tell it does appear that the compiler accounts for subtypes when doing this, so it should consistently call the proc closest to the base type https://github.com/odin-lang/Odin/blob/090cac62f9cc30f759cba086298b4bdb8c7c62b3/src/check_expr.cpp#L829.
-
-
Odin:
-
In retrospect it sounds a bit weird that odin checks for subtyping in cases of proc groups, but it can't be done directly. In a way, overloading itself sounds weird with no RTTI. Is it just because of the c++ part of odin? We were talking about options for downcasting, but maybe a proc group could also be an option while not having to store any extra data in the struct? I have no idea, it just sounds odd going back to proc groups after the limitations we were talking about. I wonder what would be cheaper, letting a proc group handle the polymorphism, or using a union subtype polymorphism as discussed
-
-
Jesse:
-
Nothing to do with the language choice for the compiler.
-
It's basically a compile-time switch. A better-designed `_Generic` macro from C.
They act on type information available at compile time. There's nothing runtime about proc groups.
-
Generics
-
Use of `$T` in the parameter type of the procedure.
Fun facts :
-
Parapoly doesn't support default values.
-
`[]$MEMBER` can't have a default value, for example.
-
-
-
Specialization :
`array: $T/[dynamic]$E`
-
`T` :
-
Type of the entire array.
-
-
`E` :
-
Type of the element inside the array.
-
-
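A runnable sketch of the specialization above (proc and variable names are mine):

```odin
package main

import "core:fmt"

// E specializes on the element type; T is the whole [dynamic]E type.
last :: proc(array: $T/[dynamic]$E) -> E {
	return array[len(array)-1]
}

main :: proc() {
	xs: [dynamic]int
	defer delete(xs)
	append(&xs, 1, 2, 3)
	fmt.println(last(xs)) // 3
}
```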
Force parameters to be compile-time constants
-
Use of `$T` in the parameter name of the procedure.
my_new :: proc($T: typeid) -> ^T {
return (^T)(alloc(size_of(T), align_of(T)))
}
ptr := my_new(int)
Deferred
-
`@(deferred_in=<proc>)`
-
Will receive the same parameters as the called proc.
-
-
`@(deferred_out=<proc>)`
-
Will receive the result of the called proc.
-
-
`@(deferred_in_out=<proc>)`
-
Will receive both.
-
-
`@(deferred_none=<proc>)`
-
Will receive no parameters.
-
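A small sketch of `@(deferred_none)` (proc names are mine):

```odin
package main

import "core:fmt"

end :: proc() {
	fmt.println("end")
}

// Every call to begin() gets an implicit `defer end()` in the caller's scope.
@(deferred_none=end)
begin :: proc() {
	fmt.println("begin")
}

main :: proc() {
	begin()
	fmt.println("body")
} // prints: begin, body, end
```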
Return from a deferred procedure
-
What happens if I have a `@(deferred_none=end) begin :: proc() -> bool` and an `end :: proc() -> bool`, and I call `result := begin()`? How does the return of deferred procedures work? Would `result` hold the value of `begin` or something else?
-
`result` will hold the return value from `begin`; the return value of `end` will be silently dropped when it runs.
-
It'd be equivalent to `result := begin(); defer end()`
-
Typing
Declaration
Constants
u :: "what"
// Untyped.
y : int : 123
// Explicitly typed constant.
-
`::` is closer to `#define` than it is to `static const`.
-
To achieve behaviour similar to C's `static const`, apply the `@(rodata)` attribute to a variable declaration (`:=`) to state that the data must live in the read-only data section of the executable.
-
"Anything declared with `::` behaves like a constant. That includes types and procs."
Aliases :
Vector3 :: [3]f32
Variables
x: int
// default to 0
// All below are equivalent.
x : int = 123
x : = 123
x := 123
x := int(123)
-
Multi-declaration :
y, z: int
// both are int.
Literal Types
-
Literals are `untyped`, but `untyped` values don't have to come from a literal; you can get `untyped` values from builtins like `len` when applicable.
-
"I might say that a literal rune is a piece of syntax that yields an untyped rune".
-
`untyped` usually means it comes from a literal, though sometimes intrinsics/builtins can give them too.
It basically just means a compile-time-known value.
-
rgats:
-
i can see why some people prefer literals having static types, `10` is always an int in C
and the conversions happen at runtime
-
but i dont think it makes a very big difference in most cases
-
honestly i think it'd make a bigger difference in a language without type inference
-
in C you have to specify the type of your literal,
10, 10u, 10f, 10l, etc, and you also have to specify the type of your variable, like unsigned long long x = 10ull; -
c implicitly converts
int to unsigned long long i believe, but if you actually wanted a very large number you'd need to specify the type -
so it gets extra messy there
-
and not every number converts implicitly, i dont think
float x = 10.5; works for example, which gets annoying
-
Untyped Types
-
Can be assigned to constants (
::) without being forced into a specific type, but once it gets assigned to a variable (=) it has to have an actual type.
A_CONSTANT :: 'x'
// is an untyped thing you can make yourself
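A small sketch of how an untyped constant adapts to the type of the variable it is assigned to (the constant name is made up):

```odin
package main

import "core:fmt"

SPEED :: 10 // untyped integer constant

main :: proc() {
	f: f32 = SPEED // the untyped constant becomes an f32 here
	i: int = SPEED // and an int here; no cast needed
	fmt.println(f, i)
}
```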
Zero Value
-
Variables declared without an explicit initial value are given their zero value.
-
The zero value is:
-
0 for numeric and rune types -
false for boolean types -
"" (the empty string) for strings -
nil for pointer, typeid, and any types.
-
-
The expression
{} can be used with all types to act as a zero value. -
This is not recommended, as it is less clear; if a type has a specific zero value shown above, please prefer that.
-
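The zero-value rules above, as a small runnable sketch (the struct is made up):

```odin
package main

import "core:fmt"

Point :: struct { x, y: f32 }

main :: proc() {
	n: int    // 0
	b: bool   // false
	s: string // ""
	p: ^int   // nil
	fmt.println(n, b, s == "", p)

	pt: Point = {} // explicit zero value; same as `pt: Point`
	fmt.println(pt)
}
```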
Broadcasting
Directive
-
#no_broadcast
Example
-
Caio:
-
I have this procedure:
tween_create :: proc(
    value: ^$T,
    #no_broadcast end: T,
    duration_s: f64,
    ease: ease.Ease = .Linear,
    start_delay_s: f64 = 0,
    custom_data: rawptr = nil,
    on_start: proc(tween: ^Tween) = nil,
    on_update: proc(tween: ^Tween) = nil,
    on_end: proc(tween: ^Tween) = nil,
    loc := #caller_location,
) -> (handle: Tween_Handle) {
    // etc
}
-
And I call it with:
tween_create(
    value = &personagem_user.arm1.pos_world,
    end = arm_relative_target_trans.pos,
    duration_s = 0.1,
    on_end = proc(tween: ^eng.Tween) {
        personagem_user.arm1.is_stepping = false
    },
)
-
So why don't I get a compile error, considering that
value is a [2]f32 and end is a f32?
-
-
Thag and Blob:
-
Because
f32 can broadcast to [2]f32:
my_arr: [2]f32
my_arr = 3.0
fmt.println(my_arr) // [2]f32{3.0, 3.0}
-
it's really useful in certain cases
-
like allowing you to do:
my_vec *= 2 -
you can add
#no_broadcast to a proc's params to stop it doing so. -
in front of the param:
#no_broadcast end: T -
you can add it to both
value and end if you want.
-
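The broadcasting behaviour described in the thread above, as a minimal runnable sketch:

```odin
package main

import "core:fmt"

main :: proc() {
	v: [2]f32 = {1, 2}
	v *= 2         // the scalar broadcasts across the whole array
	fmt.println(v) // [2, 4]

	w: [2]f32
	w = 3.0        // broadcast assignment: every element becomes 3
	fmt.println(w) // [3, 3]
}
```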
Casting
-
All the syntaxes below produce the exact same result.
-
Those are semantic casts. It's a compiler-known conversion between two types in a way that semantically makes sense.
-
A straightforward example would be converting between
int and f64; the conversion will have the same numerical value, which will change its representation in memory.
i := 123
f := f64(i)
u := u32(f)
i := 123
f := (f64)(i)
u := (u32)(f)
i := 123
f := cast(f64)i
u := cast(u32)f
~Auto Cast Operator
-
The
auto_cast operator automatically casts an expression to the destination’s type, if possible. -
This operation is only recommended for prototyping and quick tests. Do not overuse it.
x: f32 = 123
y: int = auto_cast x
Advanced Idioms, Down-Cast and Up-Cast
-
Subtyping in procedure overload :
-
Area to Hurtbox and Hurtbox to Area :
-
Very useful.
-
Caio:
-
Consider an
Area and a Hurtbox type, where Hurtbox inherits from Area (using area: Area).
obj := Area{
    area_entered = some_func_pointer,
    area_exited  = some_func_pointer,
}
fmt.printfln("OPERATION 1: %v", cast(Hurtbox)obj)
fmt.printfln("OPERATION 2: %v", cast(^Hurtbox)&obj)
-
Operation 1 is not allowed, and Operation 2 causes a stack-buffer-overflow. My question is: how / why does this happen, for both operations?
-
-
Barinzaya:
-
A
Hurtbox is an Area plus more (the Area is just part of the Hurtbox). When you assign obj to be an Area, it is only the contents of an Area; there's no extra space reserved for the extra things that a Hurtbox would also contain. -
Subtyping can easily downcast (
Hurtbox to Area) because every Hurtbox contains a complete Area, but upcasting (Area to Hurtbox) only works on an ^Area that points into a complete Hurtbox. -
NOTE : You can only cast if it's also the first field, otherwise you'd need to use
container_of. -
When you make a variable of type
Area, it isn't part of a completeHurtbox
-
-
Odin doesn't implicitly embed any RTTI (Runtime Type Information) in the type, so you can't definitively tell whether a given
Area is part of a Hurtbox or not, so there is no dynamic_cast/type-aware pointer casting. -
That's where patterns like
union-based subtype polymorphism come into play; that's an approach to adding that extra information for you to know what type it is. -
Though it stores a self-pointer, so it can cause issues if you later copy the struct without updating it.
-
-
-
Caio:
-
Isn't there a way to do something like gdscript does:
if not (area is Hitbox): return, for example? -
I mean, can I check for something like the length of the object inside the pointer, to see if the length corresponds to a complete Area or something more? I'm not sure if my question makes sense, as I don't know if checking for the content of the ^Area would give me something besides what an Area has
-
-
Barinzaya:
-
That would require Odin to implicitly add extra info into the
struct. It doesn't do that. -
And as for the length: That info isn't in the type. If you're talking like
size_of(ptr^)or something, the compiler is just going to give you that info based on what it knows based on the types. It doesn't do any kind of run-time lookup to try to figure it out. -
"as I don't know if checking for the content of the ^Area would give me something besides what an Area has". That's exactly what I'm saying--there is no other info there other than what you put in the
struct. There's nothing to check, unless you put it there yourself. -
Subtyping is syntax sugar, and nothing more.
-
-
Caio:
-
So my only options are:
-
Place some more info in the struct to avoid casting blindly
-
Yolo cast blindly, but only do the casting if you are sure it's safe (like I'm doing for the function pointers inside the structs).
-
-
-
Barinzaya:
-
Basically, yes.
-
Number 1 is what OOP languages do, they just do it implicitly. Odin doesn't do that.
-
More specifically: that info has to come from somewhere . If all you have is an
^Area, then it has to come from inside of thestruct, but it could also come from something associated with the pointer. -
A
union of pointers or an any: they store both a pointer and a tag/typeid respectively, which they use to know what the pointer actually points at. -
He means in the sense of not receiving
^Area directly, but a union or any in its place
-
-
-
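A minimal sketch of the union-of-pointers idea mentioned in the conversation above (the type names and fields are made up):

```odin
package main

import "core:fmt"

Area :: struct { x, y: f32 }
Hurtbox :: struct { using area: Area, damage: int }

// A closed set of pointer variants; the union's tag records
// which concrete type the pointer refers to.
Any_Area :: union {
	^Area,
	^Hurtbox,
}

main :: proc() {
	hb := Hurtbox{area = {1, 2}, damage = 10}
	a: Any_Area = &hb

	switch v in a {
	case ^Area:
		fmt.println("plain area:", v^)
	case ^Hurtbox:
		fmt.println("hurtbox, damage:", v.damage)
	}
}
```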
Transmute
-
It is a bitcast; that is, it reinterprets the memory for a variable without changing its actual bytes.
-
Using the same example as above,
transmuting from int to f64 will keep the same representation in memory, which means the numerical value will be different. -
This can be useful for bit-twiddling things in floats, for instance;
core:math does that for some of its procs.
f: f32 = 123
u := transmute(u32)f
Type Conversions
From
int
to
[8]byte
-
transmute([8]byte)i -
A fixed array is its data, so transmuting will give you the actual bytes of the
int. -
You may also want to consider casting to one of the endian-specific integer types first if you care about the bytes being the same on big-endian systems.
From
[]int
to
[]byte
-
[]int is a slice, but transmuting to []u8 won't change the length; a slice of 4 ints would transmute into a slice of 4 u8s. -
You probably want to use
slice.to_bytes (or more generically, slice.reinterpret). That will give you a u8 slice with the correct size. -
The same note about endianness applies here, but it's not as straightforward to convert between the two.
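A sketch contrasting the naive transmute with slice.to_bytes:

```odin
package main

import "core:fmt"
import "core:slice"

main :: proc() {
	ints := []int{1, 2, 3, 4}

	// Wrong length: transmute keeps the slice's `len` field as-is,
	// so this claims to be only 4 bytes long.
	raw := transmute([]byte)ints
	fmt.println(len(raw)) // 4

	// Correct: reinterprets the backing memory and fixes up the length.
	bytes := slice.to_bytes(ints)
	fmt.println(len(bytes)) // 4 * size_of(int), e.g. 32 on 64-bit
}
```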
From
[]T
to
[]byte
-
transmute([]byte)my_slice-
Doesn't work well.
-
"It will literally reinterpret the slice itself as a byte slice; you have to use something in
core:slice or encoding".
-
From
string
to
cstring
-
strings.unsafe_string_to_cstring(st)-
Action : Alias.
-
The internal operation is:
raw_string := transmute(mem.Raw_String)s
cs := cstring(raw_string.data)
-
-
strings.clone_to_cstring(s)-
Action : Copy.
-
From
string
to
rune
-
for in-
Assumes the string is encoded as UTF-8.
s := "important words"
for r in s {
    // r is type `rune`.
    // works equally for any UTF-8 char; e.g., Japanese, etc.
}
-
Action : Stream
-
From
string
to
[]rune
-
utf8.string_to_runes(st)-
Action : Copy
-
From
string
to
byte
last_character := s[len(s) - 1]
// This is a `byte` / `u8`
// string length is in bytes
for idx in 0..<len(s) {
fmt.println(idx, s[idx])
// 0 65
// 1 66
// 2 67
}
From
string
to
[]byte
-
transmute([]byte)s-
Action : Alias.
-
Is functionally a
[]byte with different semantics, so you can transmute to it. -
This works because their in-memory layout is the same; see
runtime.Raw_Slice and runtime.Raw_String. -
Does not work for
untyped string.-
The type needs to be explicit.
// Does not work
msg :: "hello"
data := transmute([]u8)msg

// Works
msg: string : "hello"
data := transmute([]u8)msg
-
-
From
string
to
[^]byte
-
raw_data(s)-
Action : Alias.
-
From
[]string
to
[]byte
-
It's effectively a pointer to pointers.
-
If you want the bytes of each string sequentially, you will have to loop through them and copy them into a buffer.
From
cstring
to
string
-
string(cs)-
Action : Alias.
-
-
strings.clone_from_cstring(cs)-
Action : Copy.
-
From
cstring
to
rune
-
.
From
cstring
to
[]rune
-
.
From
cstring
to
byte
-
.
From
cstring
to
[]byte
-
.
From
cstring
to
[^]byte
-
transmute([^]byte)cs-
Action : Alias.
-
From
[]byte
to
string
-
string(bs)-
Unless it's a slice literal
-
Action : Alias.
-
-
transmute(string)bs-
Action : Alias.
-
From
[]byte
to
cstring
-
.
From
[]byte
to
rune
-
.
From
[]byte
to
[]rune
-
.
From
[]byte
to
[^]byte
-
raw_data(bs)
From
byte
to
string
last_character_as_byte := my_str[len(my_str) - 1]
string([]byte{ last_character_as_byte })
From
byte
to
cstring
-
.
From
byte
to
rune
-
.
From
rune
to
string
-
With a
strings.Builder: -
strings.write_rune
-
bytes, length := utf8.encode_rune(r)
string(bytes[:length])
-
utf8.encode_rune + a slice using the int returned, to perform a string() cast. -
No allocation is needed.
From
rune
to
[]byte
-
utf8.encode_rune-
Takes a
rune and gives you a [4]u8 and an int, with which you can slice and string-cast.
-
From
[]rune
to
string
-
-
"C Byte Slice".
-
Action : Copy.
-
From
[^]byte
+ length to
string
-
strings.string_from_ptr(ptr, length)-
Action : Alias.
-
From
[^]byte
to
cstring
-
cstring(ptr)-
"C Byte Slice".
-
Action : Alias.
-
From
struct
to
[^]byte
-
cast([^]u8)&my_struct
From
struct
to
[]byte
-
(cast([^]u8)&my_struct)[:size_of(my_struct)] -
mem.ptr_to_bytes(ptr, len)-
Creates a byte slice pointing to
lenobjects, starting from the address specified byptr. -
It just does
transmute([]byte)Raw_Slice{ptr, len*size_of(T)}internally.
-
type / typeid / size_of
Type
-
-
Strange.
-
-
Get the type of a variable :
typeid_of(type_of(parse)) -
Places using
expr or type: -
base:builtin:
type_of :: proc(x: expr) -> type
-
base:intrinsics:
soa_struct :: proc($N: int, $T: typeid) -> type / #soa[N]T
type_base_type :: proc($T: typeid) -> type
type_core_type :: proc($T: typeid) -> type
type_elem_type :: proc($T: typeid) -> type
type_integer_to_unsigned :: proc($T: typeid) -> type where type_is_integer(T), !type_is_unsigned(T)
type_integer_to_signed :: proc($T: typeid) -> type where type_is_integer(T), type_is_unsigned(T)
-
typeid
-
typeid. -
typeid_of($T: typeid) -> typeid.-
Strange.
-
-
Example :
-
Caio:
-
Why isn't this allowed?
id: typeid = f32
data: int = 2
log.debugf("thing: %v", cast(id)data)
I'm trying to understand a bit more about typeid.
-
I've seen it being used as a compile time known constant in generic procedures,
$T: typeid, and in this case it can be used for casting? How does this work?
-
-
GingerBill:
-
Because
cast is a compile-time operation. -
What you are doing requires a run-time operation, which is very difficult to do.
-
-
Barinzaya:
-
A proc argument like
$T: typeid is parapoly, which means it's basically a generic/template argument. -
The compiler will generate a separate variation of the proc for every unique group of parapoly arguments it's called with.
-
Naturally, that means that the argument must be known at compile-time, so it can't be a variable.
-
-
Caio:
-
hmmm ok. So, a brief of what I was thinking of doing: I'm trying to store some data in a struct in its generic form, and then use some other data to cast it back to the original data. An
any stores exactly what I need: a rawptr and a type, but I got confused about the typeid. Is there a way to accomplish this operation?
-
-
Barinzaya:
-
You basically have to just type switch on the
any and handle the cases that you care about, e.g. how fmt handles arguments: https://github.com/odin-lang/Odin/blob/38faec757d4e4648a86fb17a1fda0e2399a3ea19/core/fmt/fmt.odin#L3168.
base_arg := arg // is an any.
base_arg.id = runtime.typeid_base(base_arg.id) // probably to avoid derived types, `my_int :: int`, something like that.
switch a in base_arg {
case bool: fmt_bool(fi, a, verb)
case b8:   fmt_bool(fi, bool(a), verb)
case b16:  fmt_bool(fi, bool(a), verb)
case b32:  fmt_bool(fi, bool(a), verb)
case b64:  fmt_bool(fi, bool(a), verb)
case any:  fmt_arg(fi, a, verb)
case rune: fmt_rune(fi, a, verb)
// etc
}
-
A
union is usually better unless you really need to handle anything. any is a pointer that doesn't behave like a pointer and is easy to misuse; a union actually contains its value. Cases needing true generic handling are rare, usually for arbitrary (de)serialization and printing.
-
-
Jesse:
-
any should be avoided until all other alternatives have been explored. -
It is almost never the case that you really don't know what set of types some data could be.
-
-
size_of
-
Why do I get a different value for
size_of, between bar1 and bar2?
Vertex :: struct {
    pos:   [2]f32,
    color: [3]f32,
}

foo :: proc(array: []$MEMBER) { // passing a `[]Vertex` as a parameter
    fmt.println(size_of(MEMBER)) // prints 20
    bar1(MEMBER)
    bar2(MEMBER)
}

bar1 :: proc(member: typeid) {
    fmt.println(size_of(member)) // prints 8
}

bar2 :: proc($member: typeid) {
    fmt.println(size_of(member)) // prints 20
}
-
In bar1, member is the typeid of Vertex, not Vertex itself, so it's getting the size of a typeid. -
typeid is the type of types. It's a hash of the type's canonical name. At compile time the compiler knows what the underlying type is, so it'll use the type itself rather than the typeid. At runtime it can't know, so it'll be a typeid. -
Compile-time
typeids are effectively types (which is why you can do stuff like proc($T: typeid) -> T), whereas run-time typeids are indeed just an ID (u64-sized).
-
any
-
any. -
Raw_Any. -
It is functionally equivalent to
struct {data: rawptr, id: typeid} with extra semantics on how assignment and type assertion work. -
The
any value is only valid as long as the underlying data is still valid. Passing a literal to an any will allocate the literal in the current stack frame.
Comparison
any
vs
union
-
any is topologically dual to a union in terms of its usage. -
Both support assignments of differing types (
any being open to any type, union being closed to a specific set of types). -
Both support type assertions (
x.(T)). -
Both support
switch in.
-
-
The main internal difference is how the memory is stored.
-
A
any, being open, is a pointer + typeid; a union is a blob + tag. -
A
union does not need to store a typeid because it is a closed, ABI-consistent set of variant types.
-
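The open/closed duality above, as a minimal side-by-side sketch (the union name is made up):

```odin
package main

import "core:fmt"

Value :: union { int, string } // closed set: blob + tag, stores its value inline

main :: proc() {
	// union: the value lives inside the union itself.
	v: Value = 42
	if n, ok := v.(int); ok {
		fmt.println("union holds int:", n)
	}

	// any: stores only a pointer + typeid; the data lives elsewhere.
	x := 42
	a: any = x
	if n, ok := a.(int); ok {
		fmt.println("any points at int:", n)
	}
}
```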
Structure
Raw_Any :: struct {
data: rawptr, // pointer to the data
id: typeid, // type of the data
}
@(require_results)
any_data :: #force_inline proc(v: any) -> (data: rawptr, id: typeid) {
return v.data, v.id
}
Storing data
-
It always stores a pointer to the data.
-
any only works by having a pointer to something. This something can be stored on the heap or on the stack. -
If the data is already stored somewhere the operation is more direct, otherwise a temp variable is created on the stack and a pointer to this temp variable is used instead.
-
For an
any to hold a value that outlasts the stack, the value needs to be stored on the heap. This is because an any only stores a pointer to something; this indirection makes things quite a bit more annoying.
Loose examples
-
The value is already stored :
x: int = 123
a: any = x
// equivalent to
a: any = { data = &x, id = typeid_of(type_of(x)) }
x: ^int = new_clone(123)
a: any = x
// equivalent to
a: any = { data = &x, id = typeid_of(type_of(x)) }
-
The value is not yet stored :
a: any = 123
// equivalent to
_tmp: int = 123 // variable created on the stack
a: any = { data = &_tmp, id = typeid_of(type_of(_tmp)) }
x: int = 123
a: any = &x
// equivalent to
_tmp: ^int = &x // variable created on the stack
a: any = { data = &_tmp, id = typeid_of(type_of(_tmp)) }
Storing a pointer to something on the stack
-
It's possible to get a pointer to the value (the value is stored):
-
Assigning implicitly:
-
x: int = 123; a: any = x-
x is a value on the stack. -
a stores a.data = &x, which is a pointer to the value on the stack.
-
-
x: ^int = &i; a: any = x-
i is a value on the stack. -
x stores a pointer to something on the stack. -
a stores a.data = &x, which is a pointer to something on the stack, to a pointer to something on the stack.
-
-
x := make([]int, 3); a: any = x-
x is an array slice on the stack, that stores a pointer to something on the heap. -
a stores a.data = &x, which is a pointer to the array slice on the stack, which then points to the heap. -
This is a really weird one, but
x is indeed on the stack, as mentioned by 'Barinzaya' and 'rats'.
-
-
x: ^int = new_clone(123); a: any = x-
x is a pointer to the heap. -
a stores a.data = &x, which is a pointer on the stack, to a pointer on the heap.
-
-
-
Assigning explicitly (storing directly into the
.datafield):-
x: ^int = &i; a: any = { data = x, id = typeid_of(int) }-
i is a value on the stack. -
x stores a pointer to something on the stack. -
a stores a pointer to something on the stack. -
Note how an indirection is removed, when comparing to
x: ^int = &i; a: any = x.
-
-
-
-
It's not possible to get a pointer to the value (the value is not stored):
-
Assigning implicitly:
-
a will always store a.data = &_tmp, where _tmp is on the stack; therefore, it always stores a pointer to the stack, due to the indirection of &_tmp. -
a: any = 123-
123 is a literal, not yet stored. -
_tmp: int = 123. -
a stores a.data = &_tmp, which is a pointer to something on the stack.
-
-
x: int = 123; a: any = &x-
&x is a pointer to x; the value is stored, but the pointer is not yet stored. -
_tmp: ^int = &x. -
a stores a.data = &_tmp, which is a pointer to something on the stack, to a pointer to something on the stack.
-
-
a: any = new_clone(123)-
new_clone(123) is a pointer to 123 on the heap; the value is stored on the heap, but the pointer is not yet stored. -
_tmp: ^int = new_clone(123). -
a stores a.data = &_tmp, which is a pointer to something on the stack, to a pointer to something on the heap.
-
-
-
Assigning explicitly (storing directly into the
.datafield):-
a: any = { data = &i, id = typeid_of(int) }-
Is the same case as
x: ^int = &i; a: any = { data = x, id = typeid_of(int) }, but removing the need for x.
-
-
-
Storing a pointer to something on the heap
-
It's possible to get a pointer to the value (the value is stored):
-
Assigning implicitly:
-
x := make([]int, 3); a: any = x[2]-
x is an array slice on the stack, that stores a pointer to something on the heap. -
a stores a.data = &x[2], which is a pointer to something on the heap. -
Even though
x is on the stack, x[i] is a value on the heap, so &x[i] is a pointer on the heap.
-
-
-
Assigning explicitly (storing directly into the
.datafield):-
x: ^int = new_clone(123, context.temp_allocator); a: any = { data = x, id = typeid_of(int) }-
x stores a pointer to something on the heap. -
a stores a pointer to something on the heap. -
This is not the same as doing:
x: ^int = new_clone(123)
a: any = x
-
a stores a pointer to x on the stack, which stores a pointer to something on the heap.
-
-
-
Note how
id needs to be int, while x is ^int. -
When unwrapping the data, we'll get an
int, not the original ^int. -
The original
^int can actually be retrieved by doing (cast(^int)a.data), instead of (cast(^int)a.data)^; this has to be done manually. -
The second option is done automatically by
a.(int). -
Doing something like
a.(^int) in this case will just cause a failure, as (cast(^^int)a.data)^ is not valid; the data is not ^^int, but ^int.
-
-
If the original
^int is not retrieved, then the pointer is lost and the memory cannot be freed; to avoid this, this technique should make use of arena allocators, such as context.temp_allocator. -
(2025-11-08)
-
I tested this and it worked correctly:
batch := new_clone(Batch(T){
    index  = i32(i),
    offset = i32(offset),
    data   = data[offset:min(offset + max_batch_size, len(data))],
}, context.temp_allocator)
args[0] = { data = batch, id = typeid_of(Batch(T)) }
-
-
-
-
-
It's not possible to get a pointer to the value (the value is not stored):
-
Assigning implicitly:-
This makes a
_tmp be created, which will always be on the stack, so this is not possible if you want to store a pointer to something on the heap.
-
-
Assigning explicitly (storing directly into the
.datafield):-
a: any = { data = new_clone(123, context.temp_allocator), id = typeid_of(int) }-
Same case as
x: ^int = new_clone(123, context.temp_allocator); a: any = { data = x, id = typeid_of(int) }, but removing the need for x.
-
-
-
About array/slices with
any
-
Barinzaya:
-
A slice is a pointer and length, in
x := make([]int), x would still be on the stack.
-
-
Rats:
-
Variables are always on the stack.
-
You can't have a "heap allocated variable", but you can have a variable holding a pointer into the heap.
-
-
Barinzaya:
-
That's what
x is. The actual data in the slice is behind the pointer, and can be anywhere (heap, stack, mapped file, static data, etc.)
A slice is ultimately just a kind of pointer, it just points to an array of a variable number of things rather than just one thing
-
-
Caio:
-
is it possible to do something like
x := make([]int); a: any = x.data, so that a.data = &x.data, which then is a pointer to the heap?
-
-
Barinzaya:
-
Kind of, but you wouldn't be able to keep the length
-
That's basically what
a: any = x[0] would do -- it would store a pointer to the first element in the backing data. But it loses the length. -
If you knew the length, you could "rebuild" the slice, but
any won't really help with that.
-
-
Caio:
-
so then, there's no way for me to store a whole array inside an
any?
-
-
Barinzaya:
-
You'd have to allocate the slice itself too
x_data := make([]int, 4)
x := new_clone(x_data)
a: any = x^
-
But that means you need to handle
delete-ing/freeing both levels of indirection. If you're getting to that point, maybe it's time to reconsider why you need that.
-
Getting the underlying value
-
(cast(^T)a.data)^ is the same as a.(T). -
Barinzaya:
-
Also asserting the
id, but otherwise yes, they are the same.
-
-
Not possible:
-
(cast(^(a.id))a.data)^ -
or
-
a.(a.id) -
As the
.id is runtime-known, not compile-time known.
-
Using
.()
My_Struct :: struct{
x: int,
y: intrinsics.Atomic_Memory_Order,
}
main :: proc() {
{
a: int = 123
b: any = a
c := b.(int)
fmt.printfln("a: %v, b: %v, c: %v", a, b, c)
}
{
a: [4]bool
b: any = a
c := b.([4]bool)
fmt.printfln("a: %v, b: %v, c: %v", a, b, c)
}
{
a := make([dynamic]My_Struct, context.temp_allocator)
append(&a, My_Struct{}, My_Struct{ 2, .Relaxed })
b: any = a
c := b.([dynamic]My_Struct)
fmt.printfln("a: %v, b: %v, c: %v", a, b, c)
}
{
a := make([dynamic]My_Struct, context.temp_allocator)
append(&a, My_Struct{}, My_Struct{ 2, .Relaxed })
b: any = a[:]
c := b.([]My_Struct)
fmt.printfln("a: %v, b: %v, c: %v", a, b, c)
}
}
-
a, b and c here are always printed the same, while c has the type of a.
Using
switch v in a {}
-
a is the any variable. -
v is the unwrapped value.
a: any = 123
switch v in a {
case int:
fmt.printfln("Is int. Value: %v", v)
// prints "Is int. Value: 123"
case []byte:
}
Using the
reflect
procedures
-
They do the same operation as shown, but fancier.
-
as_bool. -
as_bytes.
@(require_results)
as_bytes :: proc(v: any) -> []byte {
    if v != nil {
        sz := size_of_typeid(v.id)
        return ([^]byte)(v.data)[:sz]
    }
    return nil
}
-
as_f64. -
as_i64. -
as_u64. -
as_int. -
as_uint. -
-
Attempts to convert an
any to a rawptr. -
This only works for
^T, [^]T, cstring, cstring16 based types.
// Various considerations first.
result = (^rawptr)(any_value.data)^
-
-
-
Returns the equivalent of doing
raw_data(v) where v is a non-any value.
// Various considerations first.
result = any_value.data
-
Etc
Is
-
is_nil.-
Returns true if the
any value is either nil, or the data stored at the address is all zeroed
-
Etc
-
deref.-
Dereferences
any if it represents a pointer-based value (^T -> T)
-
-
-
Returns the name of the enum field if it is a valid name, using reflection; otherwise returns
"", false
-
-
equal.-
Checks to see if two
any values are semantically equivalent
-
-
-
Returns the underlying variant value of a union. Panics if a union was not passed.
-
-
-
UNSAFE: Returns the underlying tag value of a union. Panics if a union was not passed.
-
-
index.-
Gets the value by an index, if the type is indexable. Returns
nil if not possible
-
Examples
-
See the example below about
typeids.
Primitive Types
bool
-
bool .
-
Has a size of 1
byte (b8).
bool
-
Other bools:
b8 b16 b32 b64-
"The only world where you would use one of these other bools is if you are making a binding for another language that has different sized bool types."
-
bool is equivalent to b8.
-
nil
-
Types that support
nil:-
rawptr -
any -
cstring -
typeid -
enum -
bit_set -
Slices
-
procvalues -
Pointers
-
#soaPointers -
Multi-Pointers
-
Dynamic Arrays
-
map -
unionwithout the#no_nildirective -
#soaslices -
#soadynamic arrays
-
rawptr
-
rawptr .
-
All pointers can implicitly convert to
rawptr.
integer
-
int has the machine's “natural” register size.
-
Is guaranteed to be greater than or equal to the size of a pointer.
-
When you need an integer value, you should default to using
int unless you have a specific reason to use a sized or unsigned integer type:
int uint -
-
Specific sizes:
i8 i16 i32 i64 i128 u8 u16 u32 u64 u128 -
Pointer size:
uintptr -
Endian-specific integers:
// little endian i16le i32le i64le i128le u16le u32le u64le u128le // big endian i16be i32be i64be i128be u16be u32be u64be u128be
float
-
No need to use
an f suffix on float literals (unlike C's 10.5f).
f16 f32 f64
-
Endian-specific floating point numbers:
// little endian f16le f32le f64le // big endian f16be f32be f64be
rune
-
Signed 32-bit integer.
-
Represents a Unicode code point.
-
Is a distinct type from
i32.
rune
Math Types
Matrix
-
Matrix .
Creation
m: matrix[2, 3]f32
m = matrix[2, 3]f32{
1, 9, -13,
20, 5, -6,
}
Layout
Clarification
-
Rows and Columns begin at 0.
-
"column 1" means the 2nd column.
-
Same as an array.
Representation
[x, y]
-
The representation
m[x, y] is always the same regardless of the layout (column-major vs row-major).
// row 1, column 2
elem := m[1, 2]
Representation
[x]
-
Will return an array of the values in that column/row, whichever is major .
-
For column-major (default):
// column 1
elem := m[1]
-
For row-major (with
#row_major):
// row 1
elem := m[1]
Representation
[x][y]
-
m[x][y] is just m[x] and then indexing the y-th value in the array. If the layout of m[x] changes, so does this; in other words, this representation is affected by the layout. -
For column-major (default):
// column 1, row 2
elem := m[1][2]
-
For row-major (with
#row_major):
// row 1, column 2
elem := m[1][2]
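The indexing rules above, combined into one sketch using the matrix from the Creation section:

```odin
package main

import "core:fmt"

main :: proc() {
	m := matrix[2, 3]f32{
		1, 9, -13,
		20, 5, -6,
	}

	// [x, y] is layout-independent: row 1, column 2.
	fmt.println(m[1, 2]) // -6

	// A single index depends on the major layout
	// (column 1 for the default column-major layout).
	fmt.println(m[1])
}
```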
Operations
-
matrix4_perspective-
Clip Space Z Range:
-
[-1 to +1], just like OpenGL.
-
-
Clip Space Y:
-
Y Up, just like OpenGL (Vulkan is Y Down).
-
-
Handedness:
-
If
flip_z_axis is true: -
Right-handed coordinate system (camera forward is -Z).
-
-
If
flip_z_axis is false: -
Left-handed coordinate system (camera forward is +Z).
-
-
-
Quaternion
type
-
-
Is the set of all complex numbers with
f16/f32/f64 real and imaginary (i, j, & k) parts.
-
quaternion64 quaternion128 quaternion256
-
-
fN -> quaternion4N (e.g. f32 -> quaternion128) -
complex2N -> quaternion4N (e.g. complex64 -> quaternion128)
-
Interpretation
-
"It's a bit odd that the value is an operation. It's just a very mathematical approach to it. It's basically an extension of how complex numbers are written mathematically, e.g.,
1 + 2i". -
"It's more just syntax sugar for setting its fields, I think".
rot: quaternion128 = quaternion(x=0, y=0, z=0, w=1) // arguments must be named, to avoid ambiguity
rot: quaternion128 = 1 + 0i + 0j + 0k // this is valid.
-
rot: quaternion128 = 1, same as 1 + 0i + 0j + 0k. -
This is the identity quaternion.
-
-
rot: quaternion128 = 0, same as 0 + 0i + 0j + 0k.
Procedures
-
Quaternion from X :
-
-
(real, imag, jmag, kmag: Float) -> (Quaternion_Type)
-
-
-
(f: f32) -> (quaternion128)
-
-
-
(m: matrix[3, 3]f32) -> (quaternion128)
-
-
-
(m: matrix[4, 4]f32) -> (quaternion128)
-
-
-
quaternion_from_matrix.... -
quaternion_from_scalar....
-
-
-
Quaternion from angle :
-
-
(angle_radians: f32, axis: [3]f32) -> (quaternion128)
-
-
Using
quaternion_angle_axis, specifying the axis automatically:-
quaternion_from_euler_angle_x.-
(angle: f32) -> (quaternion128)
-
-
quaternion_from_euler_angle_y.-
(angle: f32) -> (quaternion128)
-
-
quaternion_from_euler_angle_z.-
(angle: f32) -> (quaternion128)
-
-
-
Using
quaternion_from_euler_angle_x/y/z, specifying the operation order:-
quaternion_from_euler_angles.-
(t1, t2, t3: f32, order: Euler_Angle_Order) -> (quaternion128)
-
-
-
quaternion_from_pitch_yaw_roll.-
(pitch, yaw, roll: f32) -> (quaternion128) -
Interestingly, does not use any of the above procedures.
-
.
-
-
-
Quaternion from vectors3 :
-
quaternion_from_forward_and_up.-
(forward, up: [3]f32) -> (quaternion128)
-
-
-
(eye, centre: [3]f32, up: [3]f32) -> (quaternion128)
-
-
quaternion_between_two_vector3.-
(from, to: [3]f32) -> (quaternion128)
-
-
-
Quaternion to quaternion
-
-
(q: quaternion128, v: [3]f32) -> ([3]f32)
-
-
-
(a, b: quaternion128, t: f32) -> (quaternion128)
-
-
-
(x, y: quaternion128, t: f32) -> (quaternion128)
-
-
-
(q1, q2, s1, s2: quaternion128, h: f32) -> (quaternion128)
-
-
-
(quaternion128) -> (quaternion128)
-
-
-
(quaternion128) -> (quaternion128)
-
-
-
X from quaternion :
-
real. -
imag. -
jmag. -
kmag. -
conj. -
-
(q: quaternion128) -> (f32)
-
-
-
(q: quaternion128) -> (angle: f32, axis: [3]f32)
-
-
euler_angles_from_quaternion.-
(m: quaternion128, order: Euler_Angle_Order) -> (t1, t2, t3: f32)
-
-
-
(q: quaternion128) -> ([3]f32)
-
-
pitch_yaw_roll_from_quaternion.-
(q: quaternion128) -> (pitch, yaw, roll: f32)
-
-
-
(q: quaternion128) -> (f32)
-
-
-
(q: quaternion128) -> (f32)
-
-
-
(q: quaternion128) -> (f32)
-
-
Complex
type
-
complex .
complex32 complex64 complex128
Interpretation
-
"It's a bit odd that the value is an operation. It's just a very mathematical approach to it. It's basically how complex numbers are written mathematically, e.g.,
1 + 2i."
Procedures
Strings
-
Strings .
Types
-
string-
Used as default when doing type inference:
my_string := "hello". -
Stores the pointer to the data and the length of the string.
-
-
cstring-
"A little longer, with a 0 at the end".
-
Is used to interface with foreign libraries written in/for C that use zero-terminated strings.
-
Syntax
"string"
'rune'
`multiline_string`
Manipulation
import "core:strings"
-
If there is allocation,
delete is used. -
Compare :
value: int = strings.compare("hello", "hi") -
Contains :
flag: bool = strings.contains("hello", "hi") // is "hi" contained in "hello"? (false here) -
Concatenate :
my_string, err := strings.concatenate({"hello", "hi"})
defer delete(my_string)
-
Upper :
my_string := strings.to_upper("hello")
defer delete(my_string)
-
Lower :
my_string := strings.to_lower("hello")
defer delete(my_string)
-
Cut :
-
"substring", "make the string smaller".
my_string, err := strings.cut("hello", 3, 5) // (string, rune_offset, rune_length)
defer delete(my_string)
-
Slicing
-
Uses array slicing property.
my_str := "little cat"
sub_str := my_str[7:]
// `cat`
-
Depending on the characters, it is useful to use the core:strings library to avoid the issues below. -
In the example below, ideally use "runes" instead of "bytes", since Japanese chars use 3 bytes per rune.
my_str := "imagine something Japanese"
sub_str := my_str[1:]
// issues
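A minimal sketch of the byte/rune mismatch described above, assuming a UTF-8 source file (the Japanese string and the counts in comments are illustrative):

```odin
package main

import "core:fmt"
import "core:unicode/utf8"

main :: proc() {
	s := "こんにちは" // 5 runes, 15 bytes (3 bytes per rune in UTF-8)

	fmt.println(len(s)) // len counts bytes (15), not characters

	// Byte slicing like s[1:] would start in the middle of the first
	// rune and produce invalid UTF-8.

	// Count and iterate by runes instead:
	fmt.println(utf8.rune_count_in_string(s)) // 5
	for r, byte_idx in s { // `for in` decodes one rune per iteration
		fmt.println(byte_idx, r)
	}
}
```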
Prints
Formatting
-
aprint. -
aprintln. -
aprintf. -
aprintfln. -
Takes any and returns string.
-
-
tprint. -
tprintln. -
tprintf. -
tprintfln. -
Takes any and returns string. -
Allocates with the temp_allocator.
-
-
bprint. -
bprintln. -
bprintf. -
bprintfln. -
Takes a []u8 backing buffer and returns string.
-
-
sbprint. -
sbprintln. -
sbprintf. -
sbprintfln. -
Takes ^strings.Builder and returns string.
-
-
caprint. -
caprintln. -
caprintf. -
caprintfln. -
Takes any and returns cstring.
-
-
ctprint. -
ctprintln. -
ctprintf. -
ctprintfln. -
Takes any and returns cstring. -
Allocates with the temp_allocator.
-
Writes to Terminal (os.stdout / os.stderr)
-
print. -
println. -
printf. -
printfln. -
Takes any and returns int. -
Prints to os.stdout.
-
-
eprint. -
eprintln. -
eprintf. -
eprintfln. -
Takes any and returns int. -
Prints to os.stderr.
-
-
Takes any and returns nothing. -
Panics .
-
Write to File
-
fprint. -
fprintln. -
fprintf. -
fprintfln. -
fprint_type. -
fprint_typeid. -
Takes os.Handle and returns int. -
Writes to a file.
-
Writes to io.Stream
-
wprint. -
wprintln. -
wprintf. -
wprintfln. -
wprint_type. -
wprint_typeid. -
Takes io.Stream and returns int (bytes written). -
Writes to the stream.
-
Formatting
Pretty Formatting
-
Opt 1 :
fmt.printf(
	"Ping %d:\n" +
	"  Client RTT: %vms (self-measured)\n" +
	"  Server RTT: %vms (server's view of us)\n",
	i,
	time.duration_milliseconds(client_rtt),
	time.duration_milliseconds(pong.client_ping), // Server's estimate
) -
Opt 2 :
-
-
Examples:
A_LONG_ENUM         = 54, // A comment about A_LONG_ENUM
AN_EVEN_LONGER_ENUM = 1,  // A comment about AN_EVEN_LONGER_ENUM

+-----------------------------------------------+
| This is a table caption and it is very long   |
+------------------+-----------------+----------+
| AAAAAAAAA        | B               | C        |
+------------------+-----------------+----------+
| 123              | foo             |          |
| 000000005        | 6.283185        |          |
| a                | bbb             | c        |
+------------------+-----------------+----------+

| AAAAAAAAA        | B               | C        |
|:-----------------|:---------------:|---------:|
| 123              | foo             |          |
| 000000005        | 6.283185        |          |
| a                | bbb             | c        | -
-
Tags in Structs
Foo :: struct {
a: [L]u8 `fmt:"s"`, // whole buffer is a string
b: [N]u8 `fmt:"s,0"`, // 0 terminated string
c: [M]u8 `fmt:"q,n"`, // string with length determined by n, and use %q rather than %s
n: int `fmt:"-"`, // ignore this from formatting
}
Custom formatters
-
See
fmt/example.odin.
Escaping symbols
-
%% -
literal percent sign
-
-
{{ -
literal open brace
-
-
}} -
literal close brace
-
Formatting Verbs
-
Using a verb in the wrong place does nothing, it just prints as if no formatting exists.
-
This is very strict: if a verb isn't listed under General or under the section for the variable's type, it won't work.
-
-
General :
-
%v/{:v}-
The value in default format
Tilesets: [Tileset{uid = 21, texture = Texture{id = 5, width = 384, height = 160, mipmaps = 1, format = "UNCOMPRESSED_R8G8B8A8"}, tilesize = [32, 32], pivot = [0, 0]}] -
-
%w-
An Odin-syntax representation of the value
Tilesets: {Tileset{uid = 21, texture = Texture{id = 5, width = 384, height = 160, mipmaps = 1, format = PixelFormat.UNCOMPRESSED_R8G8B8A8}, tilesize = {32, 32}, pivot = {0, 0}}} -
-
%T-
An Odin-syntax representation of the type of the value
Tilesets: [dynamic]Tileset -
-
%#v-
An expanded format of %v with newlines and indentation
Tilesets: [
	Tileset{
		uid = 21,
		texture = Texture{
			id = 5,
			width = 384,
			height = 160,
			mipmaps = 1,
			format = "UNCOMPRESSED_R8G8B8A8",
		},
		tilesize = [
			32,
			32,
		],
		pivot = [
			0,
			0,
		],
	},
] -
-
-
Boolean :
-
%t-
The word "true" or "false"
-
-
-
Integer :
-
%b-
base 2
-
-
%c/%r-
the character represented by the corresponding Unicode code point
-
-
%o-
base 8
-
Bytes.
-
-
%d/%i-
base 10
-
Decimal
-
Default for :
-
[]byte
-
-
-
%z-
base 12
-
-
%x-
base 16, lower-case a-f
-
Hexadecimal
-
-
%X-
base 16, upper-case A-F
-
Hexadecimal
-
-
%U-
Unicode format: U+1234; same as "U+%04X"
-
-
-
Floating-point , complex numbers , quaternions :
-
%e-
scientific notation, e.g. -1.23456e+78
-
-
%E-
scientific notation, e.g. -1.23456E+78
-
-
%f/%F-
decimal point, no exponent, e.g. 123.456
-
-
%g/%G-
synonym for %f with default max precision
-
-
%h-
hexadecimal (lower-case) with 0h prefix (0h01234abcd)
-
-
%H-
hexadecimal (upper-case) with 0H prefix (0H01234ABCD)
-
-
%m-
number of bytes in best unit, e.g. 123.45mib
-
-
%M-
number of bytes in best unit, e.g. 123.45MiB
-
-
Width and Precision :
-
Width
-
optional decimal number after '%'.
-
Default: enough to represent value.
-
-
Precision
-
after width, period + decimal number.
-
No period: default precision.
-
Period alone: precision 0.
-
-
Measured in Unicode code points (runes).
-
n.b. C's printf uses bytes.
-
Examples :
-
%f-
default width, default precision
-
-
%8f-
width 8, default precision
-
-
%.2f-
default width, precision 2
-
-
%8.3f-
width 8, precision 3
-
-
%8.f-
width 8, precision 0
-
-
-
-
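The width/precision combinations above can be sketched in one small program (the alignment described in the comments assumes the usual right-justified default):

```odin
package main

import "core:fmt"

main :: proc() {
	pi := 3.14159

	fmt.printf("%f\n", pi)      // default width, default precision
	fmt.printf("%.2f\n", pi)    // default width, precision 2
	fmt.printf("%8.3f\n", pi)   // width 8, precision 3, right-justified
	fmt.printf("%-8.3f|\n", pi) // `-` flag: left-justified within width 8
	fmt.printf("%08.3f\n", pi)  // `0` flag: pad with leading zeros
}
```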
-
String and slice of bytes :
-
%s-
uninterpreted bytes of string/slice
-
-
%q-
double-quoted string safely escaped with Odin syntax
-
-
%x-
base 16, lower-case, two chars per byte
-
-
%X-
base 16, upper-case, two chars per byte
-
-
-
Slice and dynamic array :
-
%p-
address of 0th element in base 16 (upper-case), with 0x
-
-
-
Pointer :
-
%p-
base 16, with 0x
-
-
%b,%d,%o,%z,%x,%X-
also work with pointers as integers
-
-
-
Enums :
-
%s-
name of enum field
-
-
%i,%d,%f-
also work as number
-
-
Flags
-
Ignored by verbs that don't expect them.
-
+-
always print a sign for numeric values
-
-
--
pad spaces on right (left-justify)
-
-
#-
Gives an alternative format.
-
%#b-
add leading 0b for binary
-
-
%#o-
add leading 0o for octal
-
-
%#z-
add leading 0z for dozenal
-
-
%#x/%#X-
add leading 0x or 0X for hexadecimal
-
-
%#p-
remove leading 0x for %p
-
-
%#m/%#M-
add a space between bytes and the unit of measurement
-
-
-
(space)-
leave a space for elided sign in numbers (% d)
-
-
0-
Pad with leading zeros rather than spaces
-
Rune
-
A rune is just a character in a string. -
Represents a Unicode code point.
-
Signed 32-bit integer; distinct i32. -
The default value is 0, as it's just an i32. -
They just work like numbers in most cases; well, they are numbers.
-
For example, to lower a rune you can use unicode.to_lower(r), but you can also just do r - 32 if you're only dealing with ASCII. -
Supposedly.
-
-
-
Rune values are comparable and ordered.
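A small sketch of runes behaving as numbers; the r - 32 trick only holds for ASCII letters, as noted above:

```odin
package main

import "core:fmt"
import "core:unicode"

main :: proc() {
	r: rune = 'A'

	fmt.printf("%r %d\n", r, r)             // the same value as a character and as a number (65)
	fmt.printf("%r\n", unicode.to_lower(r)) // the general, Unicode-aware way
	fmt.println(r + 32 == 'a')              // true: the ASCII-only arithmetic trick
	fmt.println('a' < 'b')                  // runes are comparable and ordered
}
```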
Untyped Runes / Rune Literals
-
Can be used to define a rune, u8, or u16.
foo := 'x'
// ^ ^
// rune untyped rune
foo: u8 = 'x'
// ^ ^
// u8 untyped rune
// This is valid for UTF-8 runes, for UTF-16 use u16.
foo: u16 = 'x'
// ^ ^
// u16 untyped rune
if str[i] == '\n'
// is using a rune literal as a `u8`
Other usages
skip_whitespace :: proc(t: ^Tokenizer) {
for {
switch t.ch {
case ' ', '\t', '\r', '\n':
advance_rune(t)
case:
return
}
}
}
Maps (Hash Maps)
-
Maps .
-
Zero value of a map is nil. A nil map has no keys.
Maps Memory Layout
-
Always a copy (of the map header; the underlying data is shared) :
m: map[string]int = ...
m2 := m // points to the same data as m
m2["foo"] = 123
// m is not aware of the new key that was added--it's in the data, but m has the wrong length
// worse, this could cause the map to reallocate, in which case m would point to freed memory
delete(m2) // m is now definitely invalid -
Are you trying to remove the entire map entry? If so: https://pkg.odin-lang.org/base/builtin/#delete_key and then delete the deleted_key (and deleted_entry if you allocated it) (edited) -
Be consistent with your keys in the map--like I said, clone them all (and then you know you should delete them all when you delete the map) or don't clone any (and then you know not to delete any, but you also need to be careful with what you insert).
-
strings just happen to be particularly annoying to deal with because they're pointers -
Allocator requirements :
-
Ginger Bill:
-
So the map type in Odin REQUIRES an allocator that can do 64-byte aligned allocations. -
What you'll need to do is change the alignment when initializing the dynamic arena:
dynamic_arena_init(&arena, alignment=64) -
This does mean every allocation is a bit wasteful, unfortunately.
-
But that's the problem of custom allocators and trying to treat them "generally" any way.
-
-
Create
-
Using make: -
Uses the current context.
m := make(map[string]int) -
-
Map literals:
m := map[string]int{ "Bob" = 2, "Chloe" = 5, }
Delete
-
Using delete:
delete(m)
Insert / update
m[key] = elem
Access
elem = m[key]
-
If an element for a key does not exist, the zero value of the element will be returned.
elem, ok := m[key] // `ok` is true if the element for that key exists
// “comma ok idiom”
//or
ok := key in m // `ok` is true if the element for that key exists
Remove element
delete_key :: proc(m: ^$T/map[$K]$V, key: $K) -> (deleted_key: $K, deleted_value: $V) {…}
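The create / insert / comma-ok / delete_key operations above can be combined into one small sketch:

```odin
package main

import "core:fmt"

main :: proc() {
	m := make(map[string]int)
	defer delete(m)

	m["Bob"] = 2
	m["Chloe"] = 5

	// "comma ok idiom": distinguish a missing key from a zero value
	if v, ok := m["Bob"]; ok {
		fmt.println(v) // 2
	}

	delete_key(&m, "Bob") // remove one entry
	fmt.println("Bob" in m) // false
	fmt.println(len(m))     // 1
}
```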
Modify
Test :: struct {
x: int,
y: int,
}
m := map[string]Test{
"Bob" = { 0, 0 },
"Chloe" = { 1, 1 },
}
// Method 1
value, ok := &m["Bob"]
if ok {
value^ = { 2, 2 }
}
// Method 2
m["Bob"] = { 3, 3 }
// Method 3 (Forbidden)
m["Chloe"].x = 0
"Compound Literals"
-
To enable compound literals for maps, #+feature dynamic-literals must be enabled per file. -
This is because dynamic literals will use the current context.allocator and thus implicitly allocate. -
The opt-in feature exists so that Odin does not implicitly allocate by default and surprise the user.
Container Calls
-
The built-in map also supports all the standard container calls that can be found with the dynamic array .
-
len(some_map)-
Returns the number of slots used
-
-
clear(&some_map)-
Clears the entire map - dynamically allocated content needs to be freed manually
-
-
cap(some_map)-
Returns the capacity of the map - the map will reallocate when exceeded
-
-
shrink(&some_map)-
Shrinks the capacity down to the current length
-
-
reserve(&some_map, capacity)-
Reserves the requested number of elements
-
Struct
-
Structs .
-
Default values are not allowed.
Structs with Parametric Polymorphism (Parapoly)
Table_Slot :: struct($Key, $Value: typeid) {
occupied: bool,
hash: u32,
key: Key,
value: Value,
}
slot: Table_Slot(string, int)
-
Example :
-
Odin-handle-map with $HT: typeid: -
Caio:
-
I have a question about the odin-handle-map :
Handle_Map :: struct($T: typeid, $HT: typeid, $N: int) {
	// Each item must have a field `handle` of type `HT`.
	items: [N]T,
	num_items: u32,
	next_unused: u32,
	unused_items: [N]u32,
	num_unused: u32,
} -
I don't understand the use of $HT: typeid, the HT is not used inside this struct, so why is it there? Does it have the same influence from outside the struct?
-
-
Thag:
-
It's because it allows other procs to then infer the handle type based on the type of the map
-
i.e.
remove :: proc(m: ^Handle_Map($T, $HT), h: HT) -
notice how HT can be known from the type specified in the handle map definition
-
-
Caio:
-
so it's just type information for the handle it holds? I mean, if I were to make a distinct handle?
-
-
Thag:
-
you're right, it's type info that is then used by other procs at compile time.
-
-
Chamberlain:
-
I've done something similar with my Vulkan abstraction, haha. Good to see someone else used poly this way too.
-
-
-
Subtype Polymorphism (Keyword
using
)
-
When using using on structs , this gives subtyping (inheritance). -
"It's like embedding in Go, but a little more explicit". -
Technically it is possible to "force" OOP via the use of "function tables" ("V tables", virtual tables) and using using to simulate inheritance: -
-
"just function pointers with a fancy name".
-
-
-
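A minimal sketch of subtype polymorphism via using; the Entity, Player, and move names are illustrative, not from the source:

```odin
package main

import "core:fmt"

Entity :: struct {
	x, y: f32,
}

Player :: struct {
	using entity: Entity, // Entity's fields are promoted into Player
	health: int,
}

// Accepts anything that "is an" Entity through `using`.
move :: proc(e: ^Entity, dx, dy: f32) {
	e.x += dx
	e.y += dy
}

main :: proc() {
	p: Player
	p.x = 1       // Entity fields are accessed directly
	move(&p, 2, 3) // ^Player converts implicitly to ^Entity
	fmt.println(p.x, p.y) // x is now 3, y is now 3
}
```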
Using using in other places: -
In procedures :
-
Ginger Bill: "It was a mistake".
-
Teej: "I can see this being useful when using RayLib a lot inside a function and I just want to drop the rl.". -
Ginger Bill: "I still think that's bad, don't use it. It's just 3 characters, it's not worth it".
-
Ginger Bill: "I regret adding this as a feature, because it only leads to unreadable spaghetti code. Try not to use it, this is a mistake.".
-
-
In file scopes:-
Not possible.
-
Ginger Bill: "I disallowed using using at the file scope, because it makes it harder to understand where the code is coming from".
-
-
Struct Memory Layout
#packed
-
Removes padding between fields that is normally inserted to ensure all fields meet their type’s alignment requirements.
-
The fields remain in source order.
-
Useful where the structure is unlikely to be correctly aligned (the insertion rules assume it is ), or if space savings are more important than access speed.
-
Accessing a field in a packed struct may require copying the field out into a temporary location, or using a machine instruction that doesn’t assume the pointer is correctly aligned, to be performant or avoid crashes on some systems. (See
intrinsics.unaligned_load.)
struct #packed {...}
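A small sketch of the padding difference; the exact sizes assume the usual alignment rules (u32 aligned to 4 bytes):

```odin
package main

import "core:fmt"

Normal :: struct {
	a: u8,  // 1 byte, followed by 3 bytes of padding so `b` is 4-byte aligned
	b: u32,
}

Packed :: struct #packed {
	a: u8,  // no padding inserted
	b: u32, // may be unaligned; access can be slower on some targets
}

main :: proc() {
	fmt.println(size_of(Normal)) // 8
	fmt.println(size_of(Packed)) // 5
}
```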
#align(N)
-
Specifies that the struct will be aligned to N bytes. -
This applies to the struct itself, not its fields.
-
Fields remain in source order.
-
Can also be applied to a union .
struct #align(4) {...}
#raw_union
-
Struct's fields will share the same memory space, similar to unions in C. -
All fields share the same offset (0). -
Useful especially for bindings.
struct #raw_union {...}
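A minimal sketch, assuming the usual type-punning use of a raw union (the F32_Bits name is illustrative):

```odin
package main

import "core:fmt"

// All fields live at offset 0, like a C union.
F32_Bits :: struct #raw_union {
	f: f32,
	u: u32,
}

main :: proc() {
	v: F32_Bits
	v.u = 0x3F80_0000 // IEEE-754 bit pattern of 1.0

	fmt.println(v.f)               // reads the same 4 bytes as f32 (1.0)
	fmt.println(size_of(F32_Bits)) // 4: the fields overlap
}
```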
Equivalence
-
Arrays :
Vec3 :: [3]f32
Vec3 :: struct {
	x: f32,
	y: f32,
	z: f32,
} -
Matrices
Matrix4x4 :: #row_major matrix[4, 4]f32
Matrix4x4 :: struct {
	m11, m12, m13, m14: f32,
	m21, m22, m23, m24: f32,
	m31, m32, m33, m34: f32,
	m41, m42, m43, m44: f32,
}
Reflect
Struct
Struct Fields
-
Type_Info :: struct {
	size: int,
	align: int,
	flags: Type_Info_Flags,
	id: typeid,
	variant: union {
		// etc
	},
} -
reflect.struct_field_value_by_name(any, string, bool) -> any. -
-
Represents information of a struct field
Struct_Field :: struct {
	name: string,
	type: ^runtime.Type_Info,
	tag: Struct_Tag,
	offset: uintptr, // in bytes
	is_using: bool,
} -
reflect.struct_fields_zipped(typeid) -> #soa[]Struct_Field. -
Returns the fields of a struct type T as an #soa slice. -
Useful for iteration.
-
-
reflect.struct_field_by_name(typeid, string) -> Struct_Field. -
reflect.struct_field_value(any, Struct_Field) -> any.
field := reflect.struct_field_by_name(type_of(the_struct), "field_name")
value_by_field := reflect.struct_field_value(the_struct, field)
-
-
-
Represents the tag string of a struct field. -
By convention, tags are concatenations of optionally space-separated key:"value" pairs. Each key is non-empty and contains no control characters other than space, quotes, and colon.
Struct_Tag :: distinct string-
reflect.struct_tag_lookup(Struct_Tag, string) -> (string, bool).-
Returns the value associated with a key in the tag string.
-
-
reflect.struct_tag_get(Struct_Tag, string) -> string.-
Wrapper around struct_tag_lookup ignoring the ok value.
-
-
Other tags
#no_copy
-
This tag can be applied to a struct to forbid copies. The initialization of a #no_copy type must be implicitly zero, a constant literal, or a return value from a call expression.
Mutex :: struct #no_copy {
state: uintptr,
}
main :: proc() {
m: Mutex
v1 := m // This line will raise an error.
p := &m
v2 := p^ // So will this line.
}
Union
Unions with Parametric Polymorphism (Parapoly)
Error :: enum {Foo0, Foo1, Foo2}
Param_Union :: union($T: typeid) #no_nil {T, Error}
r: Param_Union(int)
r = 123
r = Error.Foo0
Union Casting
-
Limitations with Pointer Casting :
-
Caio:
-
how do I convert a 'pointer to a value' to a 'pointer to a union'? I'm doing cast(^My_Union)value, where value: ^$T, with T being a generic parameter for a procedure, but I'm getting an "index out of bounds" error while trying to cast a [2]f32
-
-
Blob:
-
you can't, unions are <data><tag>, meaning they're bigger than their types. the tag is just an index into an array in the RTTI. -
except for unions with only a single pointer type union{^T}, where the tag is dropped & the nil check just checks if the pointer is nil.
-
-
-
Union type :
fmt.printfln("%T", my_union)
// or
fmt.printfln("%v", typeid_of(type_of(my_union)))
-
Unwrapped Union type :
fmt.printfln("%v", reflect.union_variant_typeid(my_union))
Type check
Via
value.(T)
Value :: union {
bool,
i32,
f32,
string,
}
v: Value
v = "Hellope"
// type assert that `v` is a `string` and panic otherwise.
s1 := v.(string)
// type assert but with an explicit BOOLEAN check. This will not panic.
s2, ok := v.(string)
if !ok {
// problem encountered.
}
Via Switch Statement
-
A type switch allows several type assertions in series.
-
A type switch is like a regular switch, but the cases are types (not values).
-
For a union, only the union's types are allowed as case types.
value: Value = ...
switch v in value {
case string:
#assert(type_of(v) == string)
case bool:
#assert(type_of(v) == bool)
case i32, f32:
// This case allows multiple types, therefore we cannot know which type to use
// `v` remains the original union value
#assert(type_of(v) == Value)
case:
// Default case
// In this case, it is `nil`
}
Maybe
-
Maybe .
-
A union which either holds a value of type T or nil. In other languages, often seen as Option(T), Result(T), etc. -
Not used much, as Odin supports multiple return values.
halve :: proc(n: int) -> Maybe(int) {
if n % 2 != 0 do return nil
return n / 2
}
half, ok := halve(2).?
if ok do fmt.println(half) // 1
half, ok = halve(3).?
if !ok do fmt.println("3/2 isn't an int")
n := halve(4).? or_else 0
fmt.println(n) // 2
Bit Sets
-
bit_set[element type; backing type] -
Bit Sets .
Creation
Direction :: enum{North, East, South, West}
Direction_Set :: bit_set[Direction]
Char_set :: bit_set['A'..='Z']
Int_Set :: bit_set[0..<10] // bit_set[0..=9]
u32_set: bit_set[u32(0)..<32]
// If you don't use u32(0), the range created will be `int`, even though the backing type is `u32`.
// Weird.
Underlying type
-
If a bit set requires a specific size, the underlying integer type can be specified:
Char_Set :: bit_set['A'..='Z'; u64]
#assert(size_of(Char_Set) == size_of(u64))
-
The underlying type is not the same thing as the type of the bitset:
unique_sets: bit_set[u32(0)..<32]
// This is a u32 bit_set
unique_sets: bit_set[0..<32; u32]
// This is an int bit_set, with u32 as backing type
// Weird.
unique_sets: bit_set[u32(0)..<32; u32]
// This is a u32 bit_set, with u32 as backing type
Evaluation
-
Bit Set vs Elements :
-
e in A- set membership (A contains element e) -
e not_in A- not set membership (A does not contain element e)
-
-
Bit Set vs Bit Set :
-
A + B- union of two sets (equivalent toA | B) -
A - B- difference of two sets (A without B’s elements) (equivalent toA &~ B) -
A & B- intersection of two sets -
A | B- union of two sets (equivalent toA + B) -
A &~ B- difference of two sets (A without B’s elements) (equivalent toA - B) -
A ~ B- symmetric difference (Elements that are in A and B but not both) -
A == B- set equality -
A != B- set inequality -
A <= B- subset relation (A is a subset of B or equal to B) -
A < B- strict subset relation (A is a proper subset of B) -
A >= B- superset relation (A is a superset of B or equal to B) -
A > B- strict superset relation (A is a proper superset of B)
-
-
card. -
card(bit_set) returns how many 1s there are in the bit_set. -
Cardinality = popcount = number of 1s.
-
Ex:
unique_sets: bit_set[u32(0)..<32; u32]
for ubo in glsl_reflect.ubos {
	unique_sets += { ubo.set }
}
for tex in glsl_reflect.textures {
	unique_sets += { tex.set }
}
set_layouts = make([]Shaders_Set_Layout, card(unique_sets), allocator)
-
Operations
-
Union of .WINDOW_RESIZABLE and .WINDOW_ALWAYS_RUN:
rl.SetWindowState({ .WINDOW_RESIZABLE, .WINDOW_ALWAYS_RUN }) -
Toggle flag:
-
~=
-
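The set operators and card listed above, in one small sketch (reusing the Direction enum from the Creation section):

```odin
package main

import "core:fmt"

Direction :: enum {North, East, South, West}
Direction_Set :: bit_set[Direction]

main :: proc() {
	a: Direction_Set = {.North, .East}
	b: Direction_Set = {.East, .West}

	fmt.println(a + b)       // union: contains North, East, West
	fmt.println(a & b)       // intersection: contains East
	fmt.println(a - b)       // difference: contains North
	fmt.println(.North in a) // true
	fmt.println(card(a))     // 2: number of set bits

	a ~= {.North} // toggle: .North was set, so it is removed
	fmt.println(a) // contains only East
}
```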
Discussions
-
TLDR : "Bitset 0 means 'activate the first bit'".
-
Caio:
-
For this bitset below, if I set my_flags = {}, would that mean it's the same as setting my_flags = { .INDIRECT_COMMAND_READ }?
AccessFlags2 :: distinct bit_set[AccessFlag2; Flags64] // distinct u64
AccessFlag2 :: enum Flags64 {
	INDIRECT_COMMAND_READ = 0,
	INDEX_READ = 1,
	VERTEX_ATTRIBUTE_READ = 2,
	// ..
} -
-
Barinzaya:
-
No. AccessFlags2 is effectively an integer (Flags64, specifically), where its bits are used to indicate the presence of the enum values. The numeric value of the enum variants corresponds to which bit in that integer is used to represent them. So bit 0 (1 << 0 == 1) is used to indicate whether INDIRECT_COMMAND_READ is set, bit 1 (1 << 1 == 2) is used to indicate whether INDEX_READ is set, and so on. So {} is an integer 0 internally, {.INDIRECT_COMMAND_READ} would be an integer 1 internally
-
-
Caio:
-
So, in my head I had this idea of a "bit mask", so for example if I set it to 3, then it would mean that INDEX_READ and VERTEX_ATTRIBUTE_READ were active, and zero would actually be zero, with nothing active -
Is it just a different concept? bit mask != bit set? I used bit masks before, so that's the model I have in my head
-
-
Barinzaya:
-
It's the same idea, I think you just have it shifted by 1. INDIRECT_COMMAND_READ occupies a bit too (edited) -
So in "raw" bit masks, it would be
INDIRECT_COMMAND_READ = 1  // 1 << 0 (bit 0)
INDEX_READ = 2             // 1 << 1 (bit 1)
VERTEX_ATTRIBUTE_READ = 4  // 1 << 2 (bit 2)
FOURTH_ONE = 8             // 1 << 3 (bit 3)
FIFTH_ONE = 16             // 1 << 4 (bit 4) -
The value in the AccessFlag2 enum is just a bit index , not a mask. -
The bit_set handles turning it into a mask.
-
Arrays
Alternatives
-
kit !!:
-
"you might want to consider just using a hashset, if the order isn't important and the array is relatively large".
-
-
Karl Zylinski:
-
It's usually a sign of poor design. Better have an index or handle around and remove using index directly. Any time my code removes by finding the index by element value, then it is a code smell to me.
-
Caio: but every time the array shifts, all indexes stored would have to be updated, no? and what would you use as a handle in this case?
-
Don't remove stuff. Use a free list.
-
Common Operations
Removing
-
-
Faster, but can change the order of elements.
unordered_remove(&dyn_arr, idx) -
-
-
Doesn't change the order of elements.
ordered_remove(&dyn_arr, idx) -
Info
-
len. -
The len built-in procedure returns the length of v according to its type: -
Array: the number of elements in v.
-
Pointer to (any) array: the number of elements in v^ (even if v is nil). -
Slice, dynamic array, or map: the number of elements in v; if v is nil, len(v) is zero. -
String: the number of bytes in v. -
Enumerated array: the number of elements in v.
-
Enum type: the number of enumeration fields.
-
#soa array: the number of elements in v; if v is nil, len(v) is zero. -
#simd vector: the number of elements in v.
-
-
For some arguments, such as a string literal or a simple array expression, the result can be constant.
-
-
cap. -
The cap built-in procedure returns the capacity of v according to its type: -
Array: the number of elements in v.
-
Pointer to (any) array: the number of elements in v^ (even if v is nil). -
Dynamic array, or map: the reserved number of elements in v; if v is nil, cap(v) is zero. -
Enum type: equal to max(Enum) - min(Enum) + 1. -
#soa dynamic array: the reserved number of elements in v; if v is nil, cap(v) is zero.
-
-
For some arguments, such as a string literal or a simple array expression, the result can be constant.
-
Fixed Arrays (
[n]T
)
-
Similarity to structs :
-
Fixed arrays are equivalent to a struct with a field for each element.
-
They are just a number of values in a row in memory.
-
Creation and Assigning
some_ints: [7]int
// With inferred size.
some_ints := [?]int{1, 2, 3, 4, 5}
favorite_animals := [?]string{
// Assign by index
0 = "Raven",
1 = "Zebra",
2 = "Spider",
// Assign by range of indices
3..=5 = "Frog",
6..<8 = "Cat"
}
some_ints[0] = 5
some_ints[3] = 40
some_ints = {5, 4, 3, 1, 2, 98, 100}
// Since the size is defined as 7, 7 elements must be given.
x := [5]int{1, 2, 3, 4, 5}
for i in 0..=4 {
fmt.println(x[i])
}
Iterate
for element in some_ints {
}
for element, idx in some_ints {
}
for &element in some_ints {
element *= 2
}
Copy
some_ints: [3]f32 = {1, 2, 3}
some_ints2 := some_ints
-
Modifying array 2 will not modify array 1, and vice versa.
-
"No shared memory between fixed arrays".
Small Array
-
Pretty neat.
-
Basically it's a Fixed Array with an API similar to a Dynamic Array.
-
I found it really cool.
-
The Skeleton uses this, as a reference.
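A hedged sketch of core:container/small_array, assuming its append / len / slice / pop_back API (check the package docs for exact signatures):

```odin
package main

import "core:fmt"
import sa "core:container/small_array"

main :: proc() {
	// Fixed backing storage (capacity 8) with a dynamic-array-like API;
	// no heap allocation happens.
	arr: sa.Small_Array(8, int)

	sa.append(&arr, 10)
	sa.append(&arr, 20, 30)

	fmt.println(sa.len(arr))    // 3
	fmt.println(sa.slice(&arr)) // view of the used portion: 10, 20, 30

	sa.pop_back(&arr)
	fmt.println(sa.len(arr))    // 2
}
```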
Enumerated Arrays (
[enum]int
)
-
Think of it as a Fixed Array.
-
Even though we don't supply the size, the array will be the size of the enum.
Create
// Enum
Nice_People :: enum {
Bob,
Klucke,
Tim
}
// Method 1
nice_rating := [Nice_People]int {
.Bob = 5,
.Klucke = 7,
.Tim = 3,
}
// Method 2: all zeroes
nice_rating: [Nice_People]int
// Method 3: partial initialization
nice_rating := #partial [Nice_People]int {
.Klucke = 7,
}
Access
bob_niceness := nice_rating[.Bob]
Slices (
[]T
)
Creation
-
Via Fixed Array :
a := [7]int{ 5, 4, 3, 1, 2, 98, 100 }
// Left side: Fixed Array.
// Right side: Array Literal.
b := a[2:5]
// Left side: Slice. -
Via Slice Literal :
-
Implicitly creates a stack fixed array, and then get a reference to it.
a := []int{ 1, 6, 3 }
// Left side: Slice.
// Right side: Slice Literal. -
-
Zero valued :
-
The zero value of a slice is nil. A nil slice has a length of 0 and does not point to any underlying memory. Slices can be compared against nil and nothing else.
a: []int
if a == nil {
	fmt.println("a is nil!")
} -
-
Via rawptr and length : -
From base:runtime -> internal.odin
@(private)
byte_slice :: #force_inline proc "contextless" (data: rawptr, len: int) -> []byte #no_bounds_check {
	return ([^]byte)(data)[:max(len, 0)]
} -
Batch assignment
-
Heap allocated slice :
-
NOT WHAT YOU WANT :
a := make([]int, 4, context.allocator)
a = { 10, 20, 30, 40 } -
This replaces the heap allocated slice with a stack slice.
-
-
Using copy:
a := make([]int, 4, context.allocator)
copy(a, []int{ 10, 20, 30, 40 }) -
Using slice.clone:
a := slice.clone([]int{ 10, 20, 30, 40 }, context.allocator)
-
Destruction
-
You only need to delete the slice if the underlying value is heap allocated.
-
You could delete the original [dynamic]int or the []int; both will delete the same memory.
a := make([dynamic]int, context.allocator)
b := a[:]
delete(b)
// Will delete the underlying memory of `a` and `b`, as both point to heap memory.
// or
a := make([]int, context.allocator)
delete(a)
Access range
// Everything
a[:]
// From idx 3 to the end
a[3:]
// From start to idx 5
a[:5]
Memory
-
Their length is not known at compile-time.
-
Slices are like references to arrays; they do not store any data.
-
Internally, a slice stores a pointer to the data and an integer to store the length of the slice.
-
"A window into the array".
-
-
A slice always points to memory, which can be on the stack or heap. If it's on the stack, there is no need for manual memory management, but if it's on the heap, we can use the address stored by the slice to free its memory, the same as is done for a [dynamic]byte. -
I can expand a [dynamic]byte, but not a []byte, since a dynamic array has an allocator and slices don't. Both of them point to memory, but slices can only free it, while [dynamic]byte can free and expand. -
No allocation is done when slicing.
-
This means it is bound to the array from which the slice was made.
-
For this reason, it is preferable to pass slices as procedure parameters.
-
-
Converting Array Slice to Dynamic Array :
some_ints3 := slice.to_dynamic(some_ints2) // array slice to dynamic array -
Allocate memory :
// Method 1
some_ints3 := slice.clone(some_ints2) // array slice to array slice
// Method 2: with `make` (a length is required)
some_ints3 := make([]int, len(some_ints2))
Delete :
-
If the slice has its own memory, then it is necessary to free this memory afterward:
delete(some_ints3)
-
-
Dynamic Arrays (
[dynamic]T
)
Creation
// Without make
dyn_arr: [dynamic]int
// With make
dyn_array := make([dynamic]int, context.temp_allocator)
dyn_array := make([dynamic]int, 5, 10, context.temp_allocator) // len 5, cap 10
// With core:bytes lib.
b: bytes.Buffer
bytes.buffer_init_allocator(&b, 0, 2048) // len 0, cap 2048
bytes.buffer_write_string(&b, "my string")
Destruction
delete(dyn_array)
Clear
clear(&dyn_array)
Appending
-
append.-
The return value n: int is the number of elements appended.
append(&dyn_arr, 5) // dyn_arr[0] is now 5

x: [dynamic]int
append(&x, 123)
append(&x, 4, 1, 74, 3) // append multiple values at once

y: [dynamic]int
append(&y, ..x[:]) // append a slice -
Memory considerations when resizing :
skeleton_add_joint :: proc(skeleton: ^Skeleton, parent_joint_idx: int, pos: eng.Vec2, rot: f32 = 0, scale: f32 = 1.0, name: string = "") -> int {
	if eng.error_assert(skeleton != nil) do return INVALID_JOINT

	parent_joint := &skeleton.joints[parent_joint_idx]
	// !! This is invalidated if the append below causes a resize.

	append(&skeleton.joints, Joint{
		pos = pos,
		rot = rot,
		scale = scale,
		name = name,
		parent = parent_joint,
		skeleton = skeleton,
	})

	joint_idx := len(skeleton.joints) - 1
	append(&parent_joint.children, joint_idx)
	return joint_idx
} -
Barinzaya:
-
If the dynamic array is full when you append something, then it'd need to resize to add one. That may cause the backing memory to move.
-
You're taking parent_joint before you append to the same array, so parent_joint may be invalid after the append. -
Also, you're storing pointers to the array in Joint (the parent field). Those can also become invalid when the dynamic array resizes
-
-
Caio:
-
So, what you are saying is that: if I have a pointer to the array, and the array resizes, then I'm screwed? I shouldn't store a pointer to an element in an array?
-
-
Barinzaya:
-
Basically, yes. Though specifically when it resizes as in reallocates (i.e. cap changes). appending to an array with len == cap will cause it to reallocate, for instance
That can possibly resize in place, but unless you know the specifics of the allocator it's using, you shouldn't rely on it.
-
If it moves, the whole array will move (i.e. the [dynamic]Joint will point somewhere else) -
The [dynamic]Joint struct itself won't move, it's still firmly in your struct--but the array's actual data is behind a pointer, and that can move.
-
-
SaiMoen:
-
Unless you know the pointer is stable because the backing allocator wouldn't move it (e.g. virtual arena w/ .Static where the only thing using it is the dynamic array).
-
-
Caio:
-
Is there a rule of thumb for dealing with this situations? some safe design I could use?
-
-
Barinzaya:
-
Store array indices, rather than pointers, and any time you append, assume that any pointers you got before the append are now invalid. -
If you were to individually allocate each Joint (i.e. using new), then resizing the array wouldn't move the Joints themselves, just its array of pointers; by using [dynamic]^Joint. -
The other option, if you know how many Joints there will be, is to reserve the dynamic array ahead of time. If it never needs to resize, then you won't have an issue. You'd just need to be careful to make sure that it doesn't, in fact, resize. Indices would probably be less error-prone. -
"have you looked at relative.Ptr? whether thats an alternative".
-
It would work if all of the pointers are within the same array as they seem to be here, yeah. Though array indices would probably be simpler and more debuggable.
-
-
It's handy to have a fixed master array of entities that never changes and serves as a reference, and then you can manipulate separate arrays of pointers (sorting, growing, etc). However this is not cache-local so it depends on your perf requirements.
-
-
-
-
-
inject
-
-
-
?
-
Iterate
for element in dyn_array {
}
for element, idx in dyn_array {
}
for &element in dyn_array {
element *= 2
}
Memory
-
Cache :
-
Location :
-
Stored in the heap.
-
They don't "hold" the memory, but actually just point to the address in memory where it is allocated.
-
-
Allocator :
-
Is where the data the pointer points to comes from and where it goes to realloc.
-
-
Interacting with Slices :
-
When you slice a dynamic array like my_dyn_array[:], the slice's pointer and len will be the same as my_dyn_array's. -
Because it's the same pointer, when you go to delete it the allocator knows which allocation you want to free.
-
In other words, freeing the array slice means that the original my_dyn_array is freed, as they both point to the same thing.
-
-
-
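A minimal sketch of the aliasing described above:

```odin
package example

import "core:fmt"

main :: proc() {
	arr: [dynamic]int
	append(&arr, 1, 2, 3)

	s := arr[:] // same pointer and len as `arr`
	fmt.println(raw_data(s) == raw_data(arr[:])) // true

	// Freeing the slice frees the dynamic array's allocation too,
	// since both refer to the same memory:
	delete(s)
	// `arr` must no longer be used (and must not be deleted again).
}
```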
Growth :
-
"Grows when the capacity is equal to the length ".
-
It's possible to use a different allocator to make the array grow:
// Method 1: change the allocator used by the array.
dyn_array: [dynamic]int
dyn_array.allocator = context.temp_allocator
append(&dyn_array, 5)

// Method 2: use `make` when creating the array.
dyn_array := make([dynamic]int, context.temp_allocator)

// Method 3: change the default allocator of the context (not recommended).
dyn_array: [dynamic]int
context.allocator = context.temp_allocator
append(&dyn_array, 5)
-
-
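A quick way to observe the growth behavior (the exact capacity sequence is an implementation detail, so it isn't asserted here):

```odin
package example

import "core:fmt"

main :: proc() {
	arr: [dynamic]int
	defer delete(arr)

	for i in 0..<10 {
		append(&arr, i)
		// A reallocation can only happen on an append that hit len == cap.
		fmt.println("len:", len(arr), "cap:", cap(arr))
	}
}
```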
Copying :
-
Correct method :
dyn_array: [dynamic]int
append(&dyn_array, 5)
dyn_array2 := slice.clone_to_dynamic(dyn_array[:]) -
Incorrect method :
-
Be careful!
dyn_array: [dynamic]int
append(&dyn_array, 5)
dyn_array2 := dyn_array -
The second array points to the same location as the first array.
-
This is extremely error-prone: appending to the first array updates only its own length (and may reallocate and move the data), while the second array still points at the old allocation with the old length; things like that.
-
It will probably crash.
-
-
-
-
Alternatives :
-
Via
Buffer from the 'core:bytes' library : -
Loads information that may or may not be useful:
-
off: int-
I believe it represents the offset from where reading stopped.
-
This is used everywhere, so if something has been read, it is excluded from all future operations, including
bytes.buffer_to_bytes. -
Fortunately, this is quite explicit when reading the library's procs.
-
-
last_read: Read_Op-
Flags for the last thing read.
Read_Op :: enum i8 {
	Read       = -1,
	Invalid    = 0,
	Read_Rune1 = 1,
	Read_Rune2 = 2,
	Read_Rune3 = 3,
	Read_Rune4 = 4,
} -
-
-
Not intended for sorting, element removal, etc.
-
Obviously possible, since it's fundamentally just a
[dynamic]T, but it's not the focus.
-
-
-
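A small sketch of how the read offset excludes consumed bytes (proc names are from core:bytes as I recall them; treat exact signatures as assumptions):

```odin
package example

import "core:bytes"
import "core:fmt"

main :: proc() {
	b: bytes.Buffer
	defer bytes.buffer_destroy(&b)

	bytes.buffer_write_string(&b, "hello")

	// Reading advances the internal offset (`off`)...
	byte_read, _ := bytes.buffer_read_byte(&b)
	fmt.println(rune(byte_read))

	// ...so already-read data is excluded from later operations:
	fmt.println(bytes.buffer_to_string(&b)) // "ello"
}
```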
Multi-pointer (
[^]T
)
-
Multi-pointers are essentially C pointers used as arrays.
-
See C#Arrays for better understanding.
-
-
"The name may be subject to change."
-
-
The type
[^]Tis a multi-pointer to T value(s). -
Used in :
-
Describe
foreign (C-like) pointers which act like arrays (pointers that map to multiple items). -
It is precisely what makes up a Raw_Cstring .
-
A Raw_String is almost the same thing, but contains a length.
-
-
-
Zero Value :
-
nil.
-
Operations
x: [^]T = ...
x[i] -> T
x[:] -> [^]T
x[i:] -> [^]T
x[:n] -> []T
x[i:n] -> []T
Re-allocating
-
Caio:
-
hello, if
image_pixels: []byte and image_data: [^]u8, how can I copy the values from image_data into image_pixels? I'm doing the following, but I'm getting a UAF.
size := image_get_size(extent, format)
image_pixels = make([]byte, size, allocator)
image_pixels = image_data[:size] -
-
Barinzaya:
-
All
image_pixels = image_data[:size] is doing is changing image_pixels to point to image_data's data, it's not actually copying the data. It sounds like you want copy(image_pixels, image_data[:size]) -
copy is a built-in procedure that copies elements from a source slice/string src to a destination slice dst. The source and destination may overlap. Copy returns the number of elements copied, which will be the minimum of len(src) and len(dst).
-
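Barinzaya's suggestion can be sketched like this, with a local array standing in for the foreign image_data pointer:

```odin
package example

import "core:fmt"

main :: proc() {
	src := [?]u8{1, 2, 3, 4}
	image_data: [^]u8 = raw_data(src[:]) // stand-in for a foreign pointer
	size := len(src)

	image_pixels := make([]byte, size)
	defer delete(image_pixels)

	// `copy` actually copies the elements; reassigning the slice would
	// only repoint it at image_data's memory.
	n := copy(image_pixels, image_data[:size])
	fmt.println(n, image_pixels) // 4 [1, 2, 3, 4]
}
```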
raw_data()
-
Interacting with Multi-Pointers is easiest using the builtin
raw_data() call, which can return a Multi-Pointer. -
raw_data .
-
Returns the underlying data of a built-in data type as a multi-pointer.
-
b := [?]int{ 10, 20, 30 }
a: [^]int
fmt.println(a) // <nil>
a = raw_data(b[:])
fmt.println(a, a[1]) // 0x7FFCBE9FE688 20
Discussion
-
Discussion 1 :
-
Is
raw_data(my_array_ptr) the same as &my_array_ptr[0], if len(my_array_ptr) > 0? I find that using &my_array_ptr[0] is a bit more intuitive when something asks for a [^], does it make sense? -
Mostly.
&my_array[0] will invoke bounds checking on slices/dynamic arrays whereas raw_data won't (and will just return the slice's pointer directly). The types are technically different, but ^T converts to [^]T so that often doesn't matter.
-
-
so, no harm no foul on using
&my_array[0]? That makes things much easier to understand -
Should be fine if you know it won't be empty, but it may incur a bounds check. Otherwise, the result will be the same.
-
-
-
Discussion 2 :
property_count: u32
vk_check(vk.EnumerateInstanceExtensionProperties(nil, &property_count, nil))
properties := make([]vk.ExtensionProperties, property_count)
vk_check(vk.EnumerateInstanceExtensionProperties(nil, &property_count, raw_data(properties)))
fmt.printfln("property_count: %v, properties: %v", property_count, properties) -
Shouldn't I avoid using a multi-pointer directly, and only use
raw_data to interface with a foreign API? -
yeah you normally don't need a multipointer
-
physical_device_properties := vk.PhysicalDeviceProperties2{ sType = .PHYSICAL_DEVICE_PROPERTIES_2 }
vk.GetPhysicalDeviceProperties2(device, &physical_device_properties) -
just earlier I had to create an array slice and pass it with a
raw_data, but now I'm just using a pointer to a struct, why is that?-
They're equivalent.
-
Both a pointer and a slice are assignable to a multi-pointer.
-
Multi-pointers are just C arrays.
-
-
Interfaces / Methods / VTables
-
The only reason I would organize data into a struct, instead of keeping them loose, would be POLYMORPHISM.
-
Ways to have different systems for the same type of data:
-
Function Pointers inside the struct, with different implementations.
-
Reminds of methods, but:
-
The procedure is not private, nor special in any way.
-
Doesn't interact with any constructor or destructor.
-
Not part of any high-level abstraction concept.
-
Can be changed at runtime.
-
-
In other words, it's better than a method.
-
-
~Generics.
-
Doesn't solve the problem. I don't want a generic procedure, but a completely different implementation of a procedure.
-
-
~Procedure Overloading.
-
Doesn't solve the problem. I don't want a dynamic dispatcher that judges the object type and calls a different procedure.
-
-
-
Cases where this could be useful :
-
ECS :
-
Reminds me of ECS concepts, where I could use a struct and call the struct with its own procedure.
-
Probably not exactly ECS, but it allows for SOA usage.
-
-
Create of User_Character / NPC_Character / Creature.
-
Destroy of User_Character / NPC_Character / Creature.
-
PrePhysics of User_Character / NPC_Character / Creature.
-
PostPhysics of User_Character / NPC_Character / Creature.
-
Draw of User_Character / NPC_Character / Creature.
-
DrawCanvas of User_Character / NPC_Character / Creature.
-
Scene.
-
This could remove the use of switches for: init, deinit, input, physics, draw.
-
The same goes for change_scene.
-
-
Note :
-
Technically this can be used in many places, BUT, I should only use it if I feel there's value in polymorphism.
-
-
-
-
"just function pointers with a fancy name".
-
Use of "function tables" ("V tables", virtual tables) with
usingin structs to achieve inheritance.
-
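As a concrete sketch of the function-pointer approach (the Scene type and its fields are hypothetical):

```odin
package example

import "core:fmt"

Scene :: struct {
	init:   proc(s: ^Scene),
	deinit: proc(s: ^Scene),
	update: proc(s: ^Scene, dt: f32),
}

menu_update :: proc(s: ^Scene, dt: f32) {
	fmt.println("menu update", dt)
}

main :: proc() {
	menu := Scene{
		update = menu_update,
		// `init`/`deinit` left nil for brevity; check before calling.
	}

	p := &menu
	p.update(p, 0.016) // plain call
	p->update(0.016)   // `->` sugar: injects `p` as the first argument
}
```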
Operator
->
-
The
-> operator can be used to inject a pointer to itself as the first parameter of the procedure. -
As the
-> operator is effectively syntactic sugar , all of the same semantics still apply, meaning subtyping through using will still work as expected to allow for the emulation of type hierarchies. -
The
-> syntax is being abused in the 2nd option to mimic UFCS , when it was initially implemented mostly for the C++ Component Object Model (COM) pattern and Objective-C code interop.
x->y(123)
// is equivalent to
x.y(x, 123)
Discussion
-
Many procedures symbolizing init, deinit, update, and draw for a scene. Each scene holds an enum value to know which scene to play at a given moment. When switching to a different scene, I will have to use a
switch to properly call the deinit procedure for the last_scene, and use a switch to call init for the new_scene. -
Packing the abstraction into control flow.
-
-
Many procedures symbolizing init, deinit, update, and draw for a scene, but the scene is now a struct holding function pointers to each of its systems. When switching to a different scene, I just call
last_scene.deinit(last_scene) or last_scene->deinit(), where deinit is the function pointer. -
Packing the abstraction into memory.
-
-
You can have a
[Scene_Enum]Deinit_Proc that contains each deinit proc for each Scene, and just use the scene enum to call the proc. -
Instead of attaching methods to types, group all operations by behavior.
typedef void (*DrawFunc)(void *);
DrawFunc draw_funcs[MAX_ENTITY_TYPE];
draw_funcs[ENTITY_PLAYER] = PlayerDraw;
draw_funcs[ENTITY_ENEMY] = EnemyDraw; -
-
Subtype polymorphism with runtime type-safe down-casting .
-
Just a selector with Enum.
-
-
Performance :
-
I got a bit worried that option 2 could be bad for a system like a scene or entity, as these function pointers are called EVERY FRAME, for every entity (in cases where I use stuff like this for entities). Am I overthinking performance here? I mean, C++ seems to do that in the end, so is it a bad thing for performance?
-
Opinions :
-
I've seen all of them in practice, and they really are nothing but syntactic choices. I doubt they'd impact performance that much. But that is just my uneducated guess.
-
Depends on how the CPU behaves, best to benchmark if you really care although I doubt it will matter much. Also an uneducated guess.
-
Source .
-
Casey:
-
Well, I guess the thing I would ask is, what is the benefit of making an "object" any more formal than just the struct and some functions? You already have the ability through function overloading to change which random number API you are using by changing the type (random_series to random_series_2 or whatever). So what is the benefit of making an "object" out of it?
-
If the answer is polymorphism, well, yeah, at that point you have a vtable situation and things start to get a lot more expensive, because the random number generation can no longer be inlined, for example - it always has to be a function call. If the answer is something else, what is that something else?
-
Yeah, ML and friends do type inference in a much better way, without making you do all kinds of template nonsense and so on. But there's a whole other set of things you have to worry about if you go that direction. It would have been nice if C++ had introduced a happy medium, but of course they always do the worst possible thing so they didn't :(
-
ML :
-
Stands for MetaLanguage — a family of functional programming languages that includes:
-
Standard ML (SML)
-
OCaml
-
F# (influenced by ML, part of the .NET ecosystem)
-
Caml (precursor to OCaml)
-
-
These languages are known for:
-
Powerful static type systems
-
Type inference: The compiler can deduce the types of most expressions without requiring explicit type annotations.
-
Immutable data structures by default
-
Strong support for pattern matching, algebraic data types, and functional abstractions
-
-
-
He is contrasting ML-style type inference (clean, automatic, minimal boilerplate) with C++ templates, which:
-
Often require verbose and complex syntax
-
Have poor error messages
-
Do not integrate cleanly with the rest of the type system
-
Are Turing-complete but hard to control (template metaprogramming)
-
-
-
-
Ginger Bill:
-
I use Go(lang) a lot at work and Go interfaces can be useful. In the io package, there are a few very useful interfaces: Reader, Writer, ReadWriter, WriterAt, WriterTo, ReaderAt, ReaderFrom.
-
Interfaces are implicit so all you have to do is implement the functions for that type and it will automatically behave as that interface. I don't use interfaces that often as I usually just use structures and functions for most things but they are useful when you need a generic function.
-
I do believe that they are implemented as vtables internally which can be a problem.
-
I know that in C++17, they will/might implement concepts which act very similar but I do not know if they will solve it. I do not know how C++17 concepts are implemented nor have I ever had the chance to use them; so I cannot comment.
-
-
-
VTables (Virtual Tables)
-
A Vtable is:
-
A table of function pointers.
-
Each class with virtual functions has its own vtable.
-
Each object of that class contains a hidden pointer to its class’s vtable (commonly the first pointer in the object's memory layout).
-
-
When a virtual method is called, the compiler emits code that:
-
Looks up the function pointer in the vtable.
-
Indirectly calls that function through the pointer.
-
-
Why VTables Can Be Problematic :
-
Performance Overhead
-
No inlining :
-
Virtual function calls can't be inlined because the exact function isn't known at compile time.
-
In C++, virtual functions disable inlining unless the compiler can devirtualize the call.
-
-
Indirect branch :
-
Every call goes through an extra pointer dereference, which introduces a pipeline stall or branch prediction failure on modern CPUs.
-
-
Cache misses :
-
Function pointers may not be in cache, leading to further delays.
-
-
-
Hidden Complexity
-
VTables are often invisible in source code in C++. You don’t explicitly write the table — the compiler generates it.
-
Every polymorphic object gets a hidden vtable pointer.
-
-
This leads to less control and transparency, especially when debugging or optimizing.
-
Harder Debugging
-
Debugging virtual dispatch is more difficult because the function being called isn’t directly visible in code.
-
Tools must inspect vtable pointers and offsets to determine the actual call target.
-
-
-
Binary Size and ABI Fragility
-
Every virtual function adds a pointer to the vtable.
-
Changing the vtable layout breaks binary compatibility (ABI), which is a concern in shared library design.
-
-
-
Calling functions inside classes via the function address stored in the VTable .
-
This enables VTable swapping (replacing entries at runtime), a technique sometimes used in game hacking.
-
Error Handling
-
It reminds me of Go.
f, err := os.open("my_file.txt")
if err != os.ERROR_NONE {
// handle error
}
defer os.close(f)
// rest of code
Definitions
-
core/os/errors.odin
Panics
-
assert.-
Can be ignored with
ODIN_DISABLE_ASSERT. -
Closes the program.
-
-
ensure.-
Cannot be ignored with
ODIN_DISABLE_ASSERT. -
Is stronger than
assert. -
Closes the program.
-
-
panic.-
Closes the program.
-
-
-
?
-
-
-
Does not close the program.
-
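A tiny usage sketch of the three, assuming the current base:runtime signatures:

```odin
package example

import "core:fmt"

main :: proc() {
	x := 1
	assert(x > 0, "x must be positive") // compiled out with ODIN_DISABLE_ASSERT
	ensure(x > 0, "x must be positive") // survives ODIN_DISABLE_ASSERT
	if x < 0 {
		panic("unreachable") // always closes the program when hit
	}
	fmt.println("ok")
}
```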
My implementations
@(require_results)
error_assert :: proc(eval: bool, loc := #caller_location, expr := #caller_expression(eval)) -> bool {
if !eval {
log.errorf("%v(%v): %v", loc.procedure, loc.line, expr)
}
return !eval
}
// Thematically it's the same as assert.
fatal_assert :: proc(eval: bool, loc := #caller_location, expr := #caller_expression(eval)) {
if !eval {
log.fatalf("%v(%v): %v", loc.procedure, loc.line, expr)
runtime.trap()
// Crashes the app.
}
}
Context
Critiques
-
(2025-12-12)
-
I'm not a fan of context at all .
-
To even print you need to have a
context defined.
This doesn't compile:
// "contextless" procedure
{
	fmt.printfln("TEST")
} -
This works just fine .
// "contextless" procedure
{
	context = runtime.default_context()
	context.allocator = mem.panic_allocator()
	context.temp_allocator = mem.panic_allocator()
	fmt.printfln("TEST")
} -
It really doesn't matter if nothing is actually initialized, or if the allocators are set to
panic_allocator. -
Let's go through the checklist:
-
Why do I NEED to define a
context?-
Because the proc is defined with the
:: proc() calling convention, indicating that it NEEDS a context.
-
-
Why does it NEED a
context?-
Because internally
fmt.printfln calls wprintf, which uses assert, and assert requires a context. -
A LOT of the other procedures inside the chain are defined with the
:: proc() calling convention without even needing it. -
In this chain, ONLY
assert requires context, and no other procedure.
-
-
Why does
assert NEED context? -
So it can "print as user configured" by the
context.assertion_failure_proc.
-
-
Why do we NEED a
context.assertion_failure_proc? -
We don't.
-
The purpose of this assertion procedure is to "use the same assert procedure as configured by the user":
assert :: proc(condition: bool, message := #caller_expression(condition), loc := #caller_location) {
	if !condition {
		@(cold)
		internal :: proc(message: string, loc: Source_Code_Location) {
			p := context.assertion_failure_proc
			if p == nil {
				p = default_assertion_failure_proc
			}
			p("runtime assertion", message, loc)
		}
		internal(message, loc)
	}
} -
But the idea doesn't actually work a lot of the time, as this happens:
-
base:runtime/default_temp_allocator_arena.odin-
This is not using the "assertion procedure" defined by the USER, just the default one.
context = default_context()
context.allocator = allocator
mem_free(block_to_free, allocator, loc) -
-
base:runtime/print.odin-
This is not using the "assertion procedure" defined by the USER, just the default one.
println_any :: #force_no_inline proc "contextless" (args: ..any) {
	context = default_context()
	loop: for arg, i in args {
		assert(arg.id != nil)
		if i != 0 {
			print_string(" ")
		}
		print_any_single(arg)
	}
	print_string("\n")
} -
-
-
There are a lot of uses of
runtime.default_context() in the base and core library, while it's also suggested to use runtime.default_context() for "c" and "contextless" calling conventions. Every time you do it, you lose the reason for context being invented in the first place. -
"The main purpose of the implicit context system is for the ability to intercept third-party code and libraries and modify their functionality. One such case is modifying how a library allocates something or logs something" - implicit context system .
-
Except when this is not the case.
-
-
Could this function be
contextless? -
Yes.
-
assert_contextless already solves this by:
assert_contextless :: proc "contextless" (condition: bool, message := #caller_expression(condition), loc := #caller_location) {
	if !condition {
		@(cold)
		internal :: proc "contextless" (message: string, loc: Source_Code_Location) {
			default_assertion_contextless_failure_proc("runtime assertion", message, loc)
		}
		internal(message, loc)
	}
} -
But if you want customization defined by the user, just:
assert_contextless :: proc "contextless" (condition: bool, message := #caller_expression(condition), loc := #caller_location) {
	if !condition {
		@(cold)
		internal :: proc "contextless" (message: string, loc: Source_Code_Location) {
			if global_assertion_failure_procedure_defined_by_the_user != nil {
				global_assertion_failure_procedure_defined_by_the_user("runtime assertion", message, loc)
			} else {
				default_assertion_contextless_failure_proc("runtime assertion", message, loc)
			}
		}
		internal(message, loc)
	}
}

main :: proc() {
	runtime.global_assertion_failure_procedure_defined_by_the_user = my_assertion_procedure
} -
Different from the way
context is used, in this case you ACTUALLY get the user-defined assertion procedure, with a fallback if not defined. -
With
context the code may or may not use the assertion procedure defined by you, but with this code above, your assertion procedure will ALWAYS be used, WITHOUT NEEDING A CONTEXT!!
-
-
-
-
Fun fact,
assert doesn't actually care if the context.assertion_failure_proc was defined. If not defined, it just falls back to the default_assertion_failure_proc:
assert :: proc(condition: bool, message := #caller_expression(condition), loc := #caller_location) {
	if !condition {
		@(cold)
		internal :: proc(message: string, loc: Source_Code_Location) {
			p := context.assertion_failure_proc
			if p == nil { // <-- note here
				p = default_assertion_failure_proc
			}
			p("runtime assertion", message, loc)
		}
		internal(message, loc)
	}
} -
So the first snippet is technically the same as the one below:
{
	context = {}
	fmt.printfln("TEST")
} -
"Ok, but in this case the allocators are not
panic_allocators, just "nil" ({ data = nil, procedure = nil }), so this might crash if you try to allocate, right?" -
Nope. If
allocator.procedure == nil, it just doesn't allocate, without returning any errors. You might not even realize you don't have a context.allocator and context.temp_allocator defined. You'll get a nil pointer without returned errors. The code just silently allows this.
mem_alloc_bytes :: #force_no_inline proc(size: int, alignment: int = DEFAULT_ALIGNMENT, allocator := context.allocator, loc := #caller_location) -> ([]byte, Allocator_Error) {
	assert(is_power_of_two_int(alignment), "Alignment must be a power of two", loc)
	if size == 0 || allocator.procedure == nil {
		return nil, nil
	}
	return allocator.procedure(allocator.data, .Alloc, size, alignment, nil, 0, loc)
} -
This is a whole different discussion about explicitness, but I thought I'd mention it.
-
-
My point is:
-
A lot of procedures REQUIRE
context when they shouldn't. They don't actually need it, and it just creates bloated and visually confusing code. -
I believe that ALL fields from the
context could be defined as thread-local global variables customizable by the user, and :: proc() should be contextless by default, with the whole context system removed. -
"But what about
context.allocator and context.temp_allocator that are so used around all the libs?" -
Instead of
context.allocator, just use runtime.allocator. -
Instead of
context.temp_allocator, just use runtime.temp_allocator. -
Both
runtime.allocator and runtime.temp_allocator would be allocators automatically initialized right before _startup_runtime(), just like runtime.default_context() does, but this time without compromising all libraries by demanding that context be used.
-
-
"What about logger??"
-
Same idea, instead of
context.logger, just use log.logger.
-
-
"What if I want something only for a scope, to then go back to the previous thing?"
context.allocator = runtime.heap_allocator()
context.user_index = 456
{
	context.allocator = my_custom_allocator()
	context.user_index = 123
}
assert(context.user_index == 456) -
First off, this is weird. I don't think it's obvious to anyone at first how
context.user_index == 456 when it was just set to 123 two lines above. -
Secondly, if you really want a scope thing, just do:
allocator := runtime.heap_allocator()
{
	scope(&allocator, {})
	assert(allocator == {})
}
assert(allocator == runtime.heap_allocator())

@(deferred_out=_scope_end)
scope :: proc(old_value: ^mem.Allocator, new_value: mem.Allocator) -> (old_value_out: ^mem.Allocator, previous_value: mem.Allocator) {
	previous_value = old_value^
	old_value^ = new_value
	return old_value, previous_value
}

_scope_end :: proc(old_value: ^mem.Allocator, previous_value: mem.Allocator) {
	old_value^ = previous_value
} -
It would be useful if we could use polymorphic parameters with
deferred_out, but I don't really mind, as I never used context this way anyway. -
I mean, it's really just an auxiliary variable, it shouldn't be that big of a problem. At least the variable changing when exiting the scope is much more obvious than the implicit way
context does.
-
-
"Finally, what about cache locality?"
-
I'm not completely sure about this one. I imagine that many of the fields inside context wouldn't care that much, as there's an indirection inside every allocator, logger, etc, but that would have to be profiled. Anyway, I would imagine it pays off for not having to carry a ~196 bytes struct around for every function call.
-
Laytan:
-
We've done a test to determine if thread local context is faster than passing it as a param and found the difference negligible.
-
-
-
Usages
-
These are the usages I could find by
ctrl+shift+F on the whole Odin repository:
Context :: struct {
	allocator:              Allocator, // Everywhere.
	temp_allocator:         Allocator, // Everywhere.
	assertion_failure_proc: Assertion_Failure_Proc,
	// Used in `assert`, `panic`, `ensure`, `unimplemented`.
	// Used in `fmt` as: `assertf`, `panicf`, `ensuref`.
	// Used in `log` as: `assert`, `assertf`, `ensure`, `ensuref`.
	random_generator: Random_Generator, // Used in `math/rand`, `encoding/uuid`.
	logger: Logger,
	// `core:log` is imported for `core:text/table`, `vendor:fontstash`, `vendor:nanovg/gl`.
	// `context.logger` is used directly only once in `core:mem` (doesn't make any sense, tbh).
	user_ptr:   rawptr, // Not used anywhere.
	user_index: int,    // Not used anywhere.
	_internal:  rawptr, // Not used anywhere, except in 1 Cpp script.
}
context.allocator
-
For “general” allocations, for the subsystem it is used within.
-
Is an OS heap allocator .
context.temp_allocator
-
For temporary and short lived allocations, which are to be freed once per cycle/frame/etc.
-
Assigned to a scratch allocator (a growing arena based allocator).
Init
-
base:runtime->core.odin
@private
__init_context :: proc "contextless" (c: ^Context) {
if c == nil {
return
}
// NOTE(bill): Do not initialize these procedures with a call as they are not defined with the "contextless" calling convention
c.allocator.procedure = default_allocator_proc
c.allocator.data = nil
c.temp_allocator.procedure = default_temp_allocator_proc
when !NO_DEFAULT_TEMP_ALLOCATOR {
c.temp_allocator.data = &global_default_temp_allocator_data
}
when !ODIN_DISABLE_ASSERT {
c.assertion_failure_proc = default_assertion_failure_proc
}
c.logger.procedure = default_logger_proc
c.logger.data = nil
c.random_generator.procedure = default_random_generator_proc
c.random_generator.data = nil
}
-
Using
context = {} -
(2025-12-12) I'm not sure how this looks as of today.
Threading
-
A new context is created using
runtime.default_context() if no context is specified when calling thread.create_and_start. -
The new context may clean up its
context.temp_allocator. -
Tetra, 2023-05-31:
-
If the user specifies a custom context for the thread, then it's entirely up to them to handle whatever allocators they're using.
-
-
// core:thread
_select_context_for_thread :: proc(init_context: Maybe(runtime.Context)) -> runtime.Context {
ctx, ok := init_context.?
if !ok {
return runtime.default_context()
}
/*
NOTE(tetra, 2023-05-31):
Ensure that the temp allocator is thread-safe when the user provides a specific initial context to use.
Without this, the thread will use the same temp allocator state as the parent thread, and thus, bork it up.
*/
when !ODIN_DEFAULT_TO_NIL_ALLOCATOR {
if ctx.temp_allocator.procedure == runtime.default_temp_allocator_proc {
ctx.temp_allocator.data = &runtime.global_default_temp_allocator_data
}
}
return ctx
}
// core:thread
_maybe_destroy_default_temp_allocator :: proc(init_context: Maybe(runtime.Context)) {
if init_context != nil {
// NOTE(tetra, 2023-05-31): If the user specifies a custom context for the thread,
// then it's entirely up to them to handle whatever allocators they're using.
return
}
if context.temp_allocator.procedure == runtime.default_temp_allocator_proc {
runtime.default_temp_allocator_destroy(auto_cast context.temp_allocator.data)
}
}
// core/thread/thread_windows.odin:41 / core/thread/thread_unix.odin:54
_create :: proc(procedure: Thread_Proc, priority: Thread_Priority) -> ^Thread {
// etc
{
context = _select_context_for_thread(init_context)
defer {
_maybe_destroy_default_temp_allocator(init_context)
runtime.run_thread_local_cleaners()
}
t.procedure(t)
}
//etc
}
Memory
-
Odin does not have a Garbage Collector (GC).
Assignment
Copy
-
"
a = b makes a copy?" -
It copies
b itself, but if b is (or contains) a pointer, the data behind that pointer won't get cloned.
-
-
Pointers :
-
Pointers aren't magical. They're values that (can) point to other values.
-
-
Maps :
-
Keys and values are always copied.
-
-
Procedures :
-
Parameters:
-
Are always passed by copy.
-
-
Returns:
-
Copy or move?
-
-
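The value-copy vs. shared-pointer distinction above can be sketched as:

```odin
package example

import "core:fmt"

Data :: struct {
	x:     int,
	items: []int, // contains a pointer internally
}

main :: proc() {
	a := Data{x = 1, items = make([]int, 3)}
	defer delete(a.items)

	b := a // copies the struct: `x` is independent...
	b.x = 2
	fmt.println(a.x, b.x) // 1 2

	// ...but `items` still points at the same backing memory:
	b.items[0] = 99
	fmt.println(a.items[0]) // 99
}
```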
Size
-
The word
size is used to denote the size in bytes . -
The word
length is used to denote the count of objects. -
size_of.-
This is evaluated at compile-time .
-
Takes an expression or type, and returns the size in bytes of the type of the expression if it was hypothetically instantiated as a variable.
-
The size does not include any memory possibly referenced by a value.
-
Slice :
-
This would return the size of the internal slice data structure and not the size of the memory referenced by the slice.
-
-
Struct :
-
Return size includes any padding introduced by field alignment (if not specified with
#packed).
-
-
Other types follow similar rules.
-
-
-
This is evaluated at runtime .
-
Returns the size of the type that the passed typeid represents
-
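For instance, on a 64-bit target (sizes follow from the layouts described above):

```odin
package example

import "core:fmt"

main :: proc() {
	// A slice is {data: rawptr, len: int}: two words, not the elements.
	fmt.println(size_of([]int))  // 16 on 64-bit
	// A fixed array's size does include its elements.
	fmt.println(size_of([4]int)) // 32 on 64-bit
	// Padding from field alignment counts too.
	Padded :: struct { a: u8, b: u64 }
	fmt.println(size_of(Padded)) // 16, not 9
}
```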
Memory Leaks
-
If the procedure does not free the memory automatically, then everything that had memory allocated must be returned from the procedure, otherwise we'll have a memory leak .
-
While inside a procedure, if I create something on the heap I should always return its pointer and not its value.
-
If you return by value, you're returning it on the stack; except any pointers that value may contain.
-
The only way to reference allocated memory is by pointer (note that slices,
strings, etc., have pointers internally, so those count)
If a procedure allocates internally
-
Options:
-
Pass an allocator as one of the parameters and return the object that will need freeing.
-
Requiring an allocator is important to avoid "implicit allocations", which remove agency from the user and make it easier to get memory leaks by accident, as it's not obvious something needs to be freed unless you read through the procedure implementation. C sometimes does this, which is bad.
-
Returning the allocated objects is a must to avoid memory leaks; otherwise the "handle" to the allocation is lost and you'll likely get a memory leak, unless the allocation was made using an Arena allocator or similar, so that by freeing the arena everything allocated with it is freed.
-
-
Create the object outside and pass the pointer to the object as a parameter.
-
This is a way to avoid a new object being created inside the procedure. It just modifies an existing object, which the caller will know to delete.
-
-
Examples
-
Will leak.
create_data :: proc(allocator: mem.Allocator) -> (data: Data) {
	data_ptr := new(Data, allocator = allocator)
	data_ptr^ = { .. something }
	data = data_ptr^
	return
}
my_data := create_data(context.allocator)
free(&my_data) -
data = data_ptr^ copies the data from the allocation back to the stack, and then the pointer to the allocation is forgotten.
-
-
Will not leak:
create_data :: proc(allocator: mem.Allocator) -> (data_ptr: ^Data) {
	data_ptr = new(Data, allocator = allocator)
	data_ptr^ = { .. something }
	return
}
my_data_ptr := create_data(context.allocator)
free(my_data_ptr)
Stack-Use-After-Return
Pointer to a pointer on the stack
-
If I have a procedure that does
x: ^int = new_clone(123), if I return &x, is this a stack-use-after-return bug? -
Pointer or not,
x is still a local variable, so &x would be a pointer to a pointer on the stack, yes. The thing that x points to , however, is not.
Examples
x_proc :: proc() -> ^int {
x_value: int = 123
return &x_value
// `x` is a value stored in the stack, while `&x` is a pointer to a value stored in the stack; this is invalid.
// Compiler Error: It is unsafe to return the address of a local variable ('&x_value') from a procedure, as it uses the current stack frame's memory
}
a_proc :: proc() -> ^int {
a_slice := make([]int, 4, context.temp_allocator)
a_slice[2] = 30
return &a_slice[2]
// `a_slice[2]` is a value stored in the heap, while `&a_slice[2]` is a pointer to a value stored in the heap, so it's fine.
}
b_proc :: proc() -> (a: any) {
b_slice := make([]int, 4, context.temp_allocator)
b_slice[2] = 30
return b_slice[2]
// `b_slice[2]` is a value stored in the heap, while `a: any = &b_slice[2]` which is a pointer to a value stored in the heap, so it's fine.
}
c_proc :: proc() -> (a: any) {
c_slice := make([]int, 4, context.temp_allocator)
c_slice[2] = 30
return &c_slice[2]
// `&c_slice[2]` is a pointer to a value stored in the heap, but `any` created an implicit indirection with `_tmp`.
// So, this ends up being `c.data = &_tmp`, where `_tmp` is in the stack of `c_proc`, so this is invalid.
}
main :: proc() {
a := a_proc()
fmt.printfln("a: %v", a) // prints an address
fmt.printfln("a^: %v", a^) // prints '30'
b := b_proc()
fmt.printfln("b: %v", b) // prints '30'
fmt.printfln("b.(): %v", b.(int)) // prints '30'
fmt.printfln("b.data: %v", b.data) // prints an address
// fmt.printfln("b.data^: %v", b.data^) // Not possible to dereference rawptr.
c := c_proc()
fmt.printfln("c: %v", c) // Invalid. This is accessing invalid memory; ASan doesn't crash, but it should.
fmt.printfln("c^: %v", c.(^int)) // Invalid. This is accessing invalid memory; ASan doesn't crash, but it should.
// Barinzaya: I wonder if `any`s aren't integrated with ASan.
}
Use-After-Free (UAF)
Rules against UAF
-
I should not create an object inside a procedure and store its address somewhere.
-
As soon as the procedure ends, its address will no longer exist.
-
Even if you return the object itself (by value), it will be moved, so its address will change once it leaves the procedure's stack.
-
See the example 'Question: Tracking allocator doesn't work' for more explanation.
-
Address after free
-
Doesn't change...
int_ptr := new(int)
fmt.println(int_ptr) // 0x262CC7B6518
free(int_ptr)
fmt.println(int_ptr) // 0x262CC7B6518
Question: Tracking allocator doesn't work
-
Caio:
track := init_tracking_allocator()

init_tracking_allocator :: proc() -> mem.Tracking_Allocator {
	track: mem.Tracking_Allocator
	mem.tracking_allocator_init(&track, context.allocator)
	context.allocator = mem.tracking_allocator(&track)
	return track
}
-
Barinzaya:
-
Changes to `context` are scoped , so after `init_tracking_allocator` returns, `context.allocator` does not change in the caller.
-
The `Allocator` contains a pointer to the underlying allocator data, which is on the stack in `init_tracking_allocator` (i.e. it's `&track`) and would no longer be valid after that proc returns. Returning it will move it, and invalidate the pointer.
-
Any time you use `&` on a local variable, the resulting pointer is only valid until the proc that variable is in returns. When you hand out a pointer (i.e. to `mem.tracking_allocator`), you need to be aware of how long that pointer needs to remain valid, and make sure that it's long enough.
-
-
-
Caio:
-
Wow, that sounds crazy hard to debug. I mean, sure, with practice that comes naturally, but how can I check for a reference to a pointer used like that? That's been my question for today. What I mean is, I wish there was a way to make such bugs not silent, because from what it seems, it just corrupts the data without giving any indication. There were a lot of suggestions to use a debugger or address sanitization, but both these suggestions require me to be actively looking for something, and what scares me is that I'm not good enough with memory to know when this will happen.
-
-
Tekk:
-
i saw a project like toybox actually use this behavior to record the beginning of the stack, so this kind of bug isnt something a compiler can check for without being extremely annoying. just like in rust, you can cause memory leaks by forgetting the root node of a linked list; the compiler has no idea what's your intention behind that.
-
plus, maybe youre creating a small buffer on the stack, so you might actually want an address to a local variable to pass to a procedure
-
-
Barinzaya:
-
It does become natural with practice, but there's just no sure-fire way to catch use-after-free issues that doesn't require actively looking for them. Odin is unmanaged, and memory is, fundamentally, just a large array of bytes, it has no concept of who owns it or what it contains beyond "bytes". The higher-level concepts that we're used to are just a matter of how those bytes are treated
-
The trick often comes down to just making your own life easier. Keep things in arrays, rather than separate allocations, use arenas for things that you know have a limited life-time (particularly deeply-nested structures that you know you'll destroy all at once, but can also be good for e.g. "I won't need this after this frame ends", for instance). These practices are better not only for you to keep track of, but less work for the CPU to do as well
-
-
jason:
-
I can attest to what Barinzaya said. It does become natural. I can write an entire program start to finish without making that mistake or really giving it any thought.
-
-
Aunt Esther:
-
I may not understand your issue totally, but if you set up ASAN correctly it catches all the below. Not sure there is anything left to check for. Multi-pointer bounds checks are not covered since you are in C-like territory there, but use those with extreme caution and usually for FFI. IMO most people do not understand these four points and options for ASAN use on windows with Odin -- you CAN use ASAN to detect:
-
Heap variable use after free (UAF) -- Odin does not detect this at compile or runtime.
-
Stack (local) variable use after free (UAF) cases that Odin does not catch at compile OR runtime, e.g. taking the address of an indexed local variable (such as an element of a local fixed array) and assigning it to another local variable that is then returned. (Note: Odin will error at compile time for local variables whose address is directly used as a return value.) Historically, stack UAF detection could sometimes produce false positives (hence the default false setting), but so far it has not in my experience with Odin.
-
For all stack UAF detection, you have to set the ASan variable `detect_stack_use_after_return` to `true` before you compile (default value is `false` otherwise) - see below for an example build command to actuate these stack UAF features.
-
Bounds-check violations on both the stack and heap; Odin will catch this class of bugs only at runtime, for example when a called proc accesses a variable out of bounds.
-
-
Here is an example Odin compiler build command for windows
-
set ASAN_OPTIONS=detect_stack_use_after_return=true & odin run . -debug -warnings-as-errors -sanitize:address -vet-unused-variables -vet-unused-imports -vet-shadowing -vet-style -strict-style -vet-semicolon -out:output.exe
-
-
The above for UAF and bounds checks, plus a debugger (for pinpointing) and the tracking allocators (for leaks and bad/double frees) should cover a lot.
-
-
Correct code:
track := init_tracking_allocator()
context.allocator = mem.tracking_allocator(&track)
init_tracking_allocator :: proc() -> mem.Tracking_Allocator {
track: mem.Tracking_Allocator
mem.tracking_allocator_init(&track, context.allocator)
return track
}
Memory: Address
Pointers
-
A pointer is an abstraction of an address , a numeric value representing the location of an object in memory. That object is said to be pointed to by the pointer. To obtain the address stored in a pointer, cast it to `uintptr`.
-
When an object's values are read through a pointer, that operation is called a load operation. When memory is written to through a pointer, that operation is called a store operation. Both of these operations can be called a memory access operation .
Implementation
-
Symbol: `^`.
-
No "pointer arithmetic".
-
The zero value of a pointer:
nil.
Multi-pointer
-
A multi-pointer is a pointer that points to multiple objects. Unlike a pointer, a multi-pointer can be indexed, but does not have a definite length.
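A minimal sketch of indexing through a multi-pointer (the array and values are illustrative):

```odin
package main

import "core:fmt"

main :: proc() {
	arr := [4]int{10, 20, 30, 40}
	// A multi-pointer to the first element; it can be indexed,
	// but carries no length and performs no bounds checking.
	mp: [^]int = raw_data(arr[:])
	fmt.println(mp[0], mp[2]) // prints '10 30'
}
```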
Slice
-
A slice is a pointer that points to multiple objects, equipped with a length specifying the number of objects the slice points to.
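A small sketch of the pointer-plus-length view of a slice; `runtime.Raw_Slice` from `base:runtime` is the underlying representation:

```odin
package main

import "base:runtime"
import "core:fmt"

main :: proc() {
	arr := [4]int{10, 20, 30, 40}
	s := arr[1:3] // data pointer to &arr[1], length 2
	fmt.println(len(s), s[0], s[1]) // prints '2 20 30'
	// The underlying representation is just a pointer plus a length:
	raw := transmute(runtime.Raw_Slice)s
	fmt.println(raw.len) // prints '2'
}
```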
Implicit Dereference
-
Pointer to a struct :
v := Vector2{1, 2}
p := &v
p.x = 1335
fmt.println(v)
-
We could write `p^.x`; however, it is nice not to have to explicitly dereference the pointer.
-
This is very useful when refactoring code to use a pointer rather than a value, and vice versa.
-
-
Pointer to an array :
ptr_to_array[index] == ptr_to_array^[index]
Implicit pointer to the stack
a := &My_Struct{}
// Is equivalent to
_a := My_Struct{} // not actually named, just for the example's sake
a := &_a
Zero by default
-
Whenever new memory is allocated, via an allocator, or on the stack, by default Odin will zero-initialize that memory, even if it wasn't explicitly initialized. This allows for some convenience in certain scenarios and ease of debugging, which will not be described in detail here.
-
However, zero-initialization can be a cause of slowdowns when allocating large buffers. For this reason, allocators have `*_non_zeroed` modes of allocation that allow the user to request uninitialized memory and avoid a relatively expensive zero-filling of the buffer.
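A quick illustration of default zero-initialization, for both stack variables and memory from `make` (a minimal sketch):

```odin
package main

import "core:fmt"

main :: proc() {
	// Stack variables are zero-initialized by default:
	x: int
	v: [4]f32
	fmt.println(x) // prints '0'
	fmt.println(v) // all elements are 0
	// Memory allocated via `make` is zeroed as well:
	s := make([]int, 3)
	defer delete(s)
	fmt.println(s) // all elements are 0
}
```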
Alignment
-
`align_of`:
-
This is evaluated at compile-time .
-
Takes an expression or type, and returns the alignment in bytes of the type of the expression if it was hypothetically instantiated as a variable `v`.
-
I guess this means:
-
It is the largest value `m` such that the address of `v` is always `0 mod m`.
-
All this effectively means "the address of `v` is always a multiple of `m`"; so `uintptr(&t) % align_of(t) == 0`.
-
This also implies `size_of(T) % align_of(T) == 0`, which means that "the size is a multiple of the alignment".
-
Notation `a ≡ r (mod m)`:
-
`≡ (mod m)` means equality up to a multiple of `m`.
-
Is read as: `a` is congruent to `r` modulo `m`.
-
Two numbers are congruent modulo `m` if they give the same remainder when divided by `m`.
-
Formally, it means: `m` divides `(a - r)`.
-
Or equivalently: `a − r = k⋅m` for some integer `k`.
-
Ex :
-
`17 ≡ 5 (mod 12)`:
-
`17 − 5 = 12`, which is a multiple of `12`.
-
`a ≡ 0 (mod m)`:
-
`a − 0 = a`, so `a` must be divisible by `m`.
-
-
-
-
-
`reflect.align_of_typeid(typeid) -> int`:
-
This is evaluated at runtime .
-
Returns the alignment of the type that the passed typeid represents.
-
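The two relations above can be checked directly; `My_Struct` here is a hypothetical example type:

```odin
package main

import "core:fmt"

My_Struct :: struct {
	a: u8,
	b: u64, // a u64 field typically forces an 8-byte alignment
}

main :: proc() {
	t: My_Struct
	// "the address of `v` is always a multiple of `m`":
	fmt.println(uintptr(&t) % align_of(My_Struct) == 0) // prints 'true'
	// "the size is a multiple of the alignment":
	fmt.println(size_of(My_Struct) % align_of(My_Struct) == 0) // prints 'true'
}
```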
Memory: Allocators
Allocator :: struct {
procedure: Allocator_Proc,
data: rawptr,
}
-
Allocators, Linear Allocators, Fragmentation, Stack Allocators - Nic Barker .
-
To reduce fragmentation in linear allocators, the memory region is divided into blocks.
-
Memory that is all together, with sequentially increasing addresses, is called Contiguous .
-
Why use allocators
-
In C and C++ memory models, allocations of objects in memory are typically treated individually with a generic allocator (the `malloc` procedure), which in some scenarios can lead to poor cache utilization, slowdowns on individual objects' memory management, and growing complexity of the code needing to keep track of the pointers and their lifetimes.
Using different kinds of allocators for different purposes can solve these problems. The allocators are typically optimized for specific use-cases and can potentially simplify the memory management code.
-
For example, in the context of making a game, having an Arena allocator could simplify allocations of any temporary memory, because the programmer doesn't have to keep track of which objects need to be freed every time they are allocated, because at the end of every frame the whole allocator is reset to its initial state and all objects are freed at once.
-
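A minimal sketch of that per-frame pattern with a growing virtual-memory arena (the frame count and allocation size are illustrative):

```odin
package main

import vmem "core:mem/virtual"

main :: proc() {
	arena: vmem.Arena
	_ = vmem.arena_init_growing(&arena)
	defer vmem.arena_destroy(&arena)

	frame_allocator := vmem.arena_allocator(&arena)

	for _ in 0 ..< 3 {
		// Temporary per-frame allocations; nothing is freed individually.
		scratch := make([]f32, 1024, frame_allocator)
		_ = scratch
		// End of frame: reset the whole arena, freeing everything at once.
		vmem.arena_free_all(&arena)
	}
}
```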
The allocators have different kinds of restrictions on object lifetimes, sizes, and alignment, and can be a significant gain if used properly. Odin supports allocators on a language level.
-
Operations such as `new`, `free` and `delete` will by default use `context.allocator`, which can be overridden by the user. When an override happens, all called procedures will inherit the new context and use the same allocator.
-
We will define one concept to simplify the description of some allocator-related procedures, which is ownership. If the memory was allocated via a specific allocator, that allocator is said to be the owner of that memory region. To note, unlike Rust, in Odin the memory ownership model is not strict.
Notes
-
There are some allocator requirements for `map`s; see the Maps (Hash Maps) section.
-
"Arenas and Dynamic Allocators together can sometimes be inefficient".
-
I didn't fully understand the concept.
Implicit Allocator Usage
For
context.allocator
-
`runtime.default_allocator()`:
-
Only used if the `context.temp_allocator` is not manually initialized.
-
-
`runtime.heap_allocator()`:
-
Used a lot around `os2` and `os`.
-
`TEMP_ALLOCATOR_GUARD`:
-
If the `context.temp_allocator` is not manually initialized.
-
-
`os2._env: [dynamic]string` in `os2/env_linux.odin`.
-
`os2.get_args()` / `os2.delete_args()`.
-
`os2.file_allocator()`.
-
`os2` walkers.
-
etc.; a LOT of places inside the `os2` lib.
-
-
-
`os.args`:
-
Uses it implicitly.
-
This is fixed by using `os2`, which still uses a heap allocator implicitly, but at least it's not the `context.allocator` but the `os2.heap_allocator`.
-
It's technically the same thing, but at least this doesn't break `-default-to-panic-allocator`.
-
-
For
context.temp_allocator
-
Conclusion :
-
`context.temp_allocator` / `runtime.DEFAULT_TEMP_ALLOCATOR_TEMP_GUARD` is used implicitly A LOT inside the `core` libraries.
-
-
`base`:
-
Nothing uses it. Just definition.
-
-
`core`:
-
`compress/common`:
-
Has TODOs to remove it.
-
-
`encoding/json`:
-
Uses implicitly.
-
-
`encoding/xml`:
-
Uses implicitly.
-
-
`flags`:
-
Uses implicitly.
-
-
`fmt`:
-
Uses implicitly.
-
-
`image/jpeg`:
-
Uses implicitly.
-
-
`image/netpbm`:
-
Uses implicitly with guard.
-
-
`image/png`:
-
Uses implicitly with guard.
-
-
`net`:
-
Uses implicitly.
-
-
`odin/parser`:
-
Uses implicitly.
-
-
`os`:
-
Uses implicitly with guard.
-
-
`os/os2`:
-
Uses implicitly with guard.
-
-
`path/filepath`:
-
Uses implicitly with guard.
-
-
`path/slashpath`:
-
Uses implicitly with guard.
-
-
`sys/windows`:
-
Uses implicitly.
-
-
`sys/darwin`:
-
Uses implicitly.
-
-
`sys/info`:
-
Uses implicitly.
-
-
`sys/orca`:
-
Uses implicitly.
-
-
`testing`:
-
Uses implicitly.
-
-
`encoding/cbor`:
-
It's overridable in the parameters.
-
`cbor/tags.odin`, wtf?
-
I'm seeing `delete` with `context.temp_allocator`...
-
-
The library is really messy.
-
-
`container`:
-
It's overridable in the parameters.
-
-
`container/kmac`:
-
It's overridable in the parameters.
-
-
`dynlib`:
-
It's overridable in the parameters.
-
-
`thread`:
-
Deletes the `context.temp_allocator` if set.
-
-
Default Allocators
-
For `context.allocator`:
when ODIN_DEFAULT_TO_NIL_ALLOCATOR {
	default_allocator_proc :: nil_allocator_proc
	default_allocator :: nil_allocator
} else when ODIN_DEFAULT_TO_PANIC_ALLOCATOR {
	default_allocator_proc :: panic_allocator_proc
	default_allocator :: panic_allocator
} else when ODIN_OS != .Orca && (ODIN_ARCH == .wasm32 || ODIN_ARCH == .wasm64p32) {
	default_allocator :: default_wasm_allocator
	default_allocator_proc :: wasm_allocator_proc
} else {
	default_allocator :: heap_allocator
	default_allocator_proc :: heap_allocator_proc
}
-
For `context.temp_allocator`:
when NO_DEFAULT_TEMP_ALLOCATOR {
	default_temp_allocator_proc :: nil_allocator_proc
} else {
	default_temp_allocator_proc :: proc(allocator_data: rawptr, mode: Allocator_Mode,
	                                    size, alignment: int,
	                                    old_memory: rawptr, old_size: int,
	                                    loc := #caller_location) -> (data: []byte, err: Allocator_Error) {
		s := (^Default_Temp_Allocator)(allocator_data)
		return arena_allocator_proc(&s.arena, mode, size, alignment, old_memory, old_size, loc)
	}
}
-
Both are used here:
__init_context :: proc "contextless" (c: ^Context) {
// etc
c.allocator.procedure = default_allocator_proc
c.allocator.data = nil
c.temp_allocator.procedure = default_temp_allocator_proc
when !NO_DEFAULT_TEMP_ALLOCATOR {
c.temp_allocator.data = &global_default_temp_allocator_data
}
// etc
}
Nil Allocator
-
The nil allocator returns `nil` on every allocation attempt. This type of allocator can be used in scenarios where memory doesn't need to be allocated, but an attempt to allocate memory is not an error.
@(require_results)
nil_allocator :: proc() -> Allocator {
return Allocator{
procedure = nil_allocator_proc,
data = nil,
}
}
nil_allocator_proc :: proc(
allocator_data: rawptr,
mode: Allocator_Mode,
size, alignment: int,
old_memory: rawptr,
old_size: int,
loc := #caller_location,
) -> ([]byte, Allocator_Error) {
return nil, nil
}
Default to Nil
-
Use `-default-to-nil-allocator` as a compilation flag.
-
Keep in mind: `-default-to-panic-allocator` cannot be used with `-default-to-nil-allocator`.
Panic Allocator
-
The panic allocator is a type of allocator that panics on any allocation attempt. This type of allocator can be used in scenarios where memory should not be allocated, and an attempt to allocate memory is an error.
// basically the same as the Nil Allocator, but panics.
Uses
-
To ensure explicit allocators, different from `context.allocator`:
-
You could set `context.allocator` to a `runtime.panic_allocator()` so that if anything uses it by accident it'll panic, then pass your allocator around explicitly.
-
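A small sketch of that pattern; the temp-allocator use is just one example of an explicitly named allocator:

```odin
package main

import "base:runtime"

main :: proc() {
	context.allocator = runtime.panic_allocator()
	// Anything that implicitly allocates through context.allocator now panics,
	// so every allocation site must name its allocator explicitly:
	buf := make([]u8, 16, context.temp_allocator)
	defer free_all(context.temp_allocator)
	_ = buf
}
```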
Default to Panic
-
Use `-default-to-panic-allocator` as a compilation flag.
-
Keep in mind: `-default-to-panic-allocator` cannot be used with `-default-to-nil-allocator`.
Arena: Backed directly by virtual memory (
vmem.Arena
)
-
Reserving virtual memory does not increase memory usage. It goes up when the dynamic array actually grows into that reserved space.
-
Uses virtual memory directly , whereas the arenas in `mem` use a `[]byte` or `[dynamic]byte` for their memory, so they basically still exist inside the heap allocator.
// Create an `Allocator` from the provided `Arena`
@(require_results, no_sanitize_address)
arena_allocator :: proc(arena: ^Arena) -> mem.Allocator {
return mem.Allocator{arena_allocator_proc, arena}
}
kind
.Static
-
Contains a single `Memory_Block` allocated with virtual memory.
kind
.Growing
-
Is a linked list of `Memory_Block`s allocated with virtual memory.
-
Allows for `vmem.Arena_Temp`, which can call `vmem.arena_growing_free_last_memory_block`, shrinking itself, from my understanding.
kind
.Buffer
-
I'm not using this one; it seems redundant. Just use `mem.Arena`.
-
Demo :
-
.
-
-
Discussion :
-
Caio:
-
Is the arena buffer from `mem/virtual` actually virtual? I'm confused, as the buffer is externally passed to `arena_init_buffer`, and from what I was able to understand, the memory is never committed.
-
I mean, isn't a `mem.Arena` more efficient, as it avoids unnecessary checks for something that will never be committed? They both seem to do the same thing, while the `mem/virtual` buffer uses the concept of `Memory_Block`s as an abstraction, but it doesn't seem to matter in this case.
-
-
Barinzaya:
-
`buffer` is one mode, but in that one you provide the memory. The other modes (the default `growing` as well as `static`) do their own allocation, indeed using virtual memory.
-
I guess it's just a matter of flexibility. It already has a mode to check anyway, and a lot of the logic is the same, so I guess it's a "might as well" -- though I do find `virtual.Arena` to be trying to do a bit too much myself.
-
In the bigger picture, using the same code for both could prove beneficial in terms of instruction cache, even if the code is less specialized
-
If you're actually using it in both modes, that is.
-
-
Bootstrapping
// Ability to bootstrap allocate a struct with an arena within the struct itself using the growing variant strategy.
arena_growing_bootstrap_new :: proc{
arena_growing_bootstrap_new_by_offset,
arena_growing_bootstrap_new_by_name,
}
// Ability to bootstrap allocate a struct with an arena within the struct itself using the static variant strategy.
arena_static_bootstrap_new :: proc{
arena_static_bootstrap_new_by_offset,
arena_static_bootstrap_new_by_name,
}
Alloc from Memory Block
-
Allocates memory from the provided arena.
@(require_results, no_sanitize_address, private)
arena_alloc_unguarded :: proc(arena: ^Arena, size: uint, alignment: uint, loc := #caller_location) -> (data: []byte, err: Allocator_Error) {
size := size
if size == 0 {
return nil, nil
}
switch arena.kind {
case .Growing:
prev_used := 0 if arena.curr_block == nil else arena.curr_block.used
data, err = alloc_from_memory_block(arena.curr_block, size, alignment, default_commit_size=arena.default_commit_size)
if err == .Out_Of_Memory {
if arena.minimum_block_size == 0 {
arena.minimum_block_size = DEFAULT_ARENA_GROWING_MINIMUM_BLOCK_SIZE
arena.minimum_block_size = mem.align_forward_uint(arena.minimum_block_size, DEFAULT_PAGE_SIZE)
}
if arena.default_commit_size == 0 {
arena.default_commit_size = min(DEFAULT_ARENA_GROWING_COMMIT_SIZE, arena.minimum_block_size)
arena.default_commit_size = mem.align_forward_uint(arena.default_commit_size, DEFAULT_PAGE_SIZE)
}
if arena.default_commit_size != 0 {
arena.default_commit_size, arena.minimum_block_size =
min(arena.default_commit_size, arena.minimum_block_size),
max(arena.default_commit_size, arena.minimum_block_size)
}
needed := mem.align_forward_uint(size, alignment)
needed = max(needed, arena.default_commit_size)
block_size := max(needed, arena.minimum_block_size)
new_block := memory_block_alloc(needed, block_size, alignment, {}) or_return
new_block.prev = arena.curr_block
arena.curr_block = new_block
arena.total_reserved += new_block.reserved
prev_used = 0
data, err = alloc_from_memory_block(arena.curr_block, size, alignment, default_commit_size=arena.default_commit_size)
}
arena.total_used += arena.curr_block.used - prev_used
case .Static:
if arena.curr_block == nil {
if arena.minimum_block_size == 0 {
arena.minimum_block_size = DEFAULT_ARENA_STATIC_RESERVE_SIZE
}
arena_init_static(arena, reserved=arena.minimum_block_size, commit_size=DEFAULT_ARENA_STATIC_COMMIT_SIZE) or_return
}
if arena.curr_block == nil {
return nil, .Out_Of_Memory
}
data, err = alloc_from_memory_block(arena.curr_block, size, alignment, default_commit_size=arena.default_commit_size)
arena.total_used = arena.curr_block.used
case .Buffer:
if arena.curr_block == nil {
return nil, .Out_Of_Memory
}
data, err = alloc_from_memory_block(arena.curr_block, size, alignment, default_commit_size=0)
arena.total_used = arena.curr_block.used
}
// sanitizer.address_unpoison(data)
return
}
@(require_results, no_sanitize_address)
alloc_from_memory_block :: proc(block: ^Memory_Block, min_size, alignment: uint, default_commit_size: uint = 0) -> (data: []byte, err: Allocator_Error) {
@(no_sanitize_address)
calc_alignment_offset :: proc "contextless" (block: ^Memory_Block, alignment: uintptr) -> uint {
alignment_offset := uint(0)
ptr := uintptr(block.base[block.used:])
mask := alignment-1
if ptr & mask != 0 {
alignment_offset = uint(alignment - (ptr & mask))
}
return alignment_offset
}
@(no_sanitize_address)
do_commit_if_necessary :: proc(block: ^Memory_Block, size: uint, default_commit_size: uint) -> (err: Allocator_Error) {
if block.committed - block.used < size {
pmblock := (^Platform_Memory_Block)(block)
base_offset := uint(uintptr(pmblock.block.base) - uintptr(pmblock))
// NOTE(bill): [Heuristic] grow the commit size larger than needed
// TODO(bill): determine a better heuristic for this behaviour
extra_size := max(size, block.committed>>1)
platform_total_commit := base_offset + block.used + extra_size
platform_total_commit = align_formula(platform_total_commit, DEFAULT_PAGE_SIZE)
platform_total_commit = min(max(platform_total_commit, default_commit_size), pmblock.reserved)
assert(pmblock.committed <= pmblock.reserved)
assert(pmblock.committed < platform_total_commit)
platform_memory_commit(pmblock, platform_total_commit) or_return
pmblock.committed = platform_total_commit
block.committed = pmblock.committed - base_offset
}
return
}
if block == nil {
return nil, .Out_Of_Memory
}
alignment_offset := calc_alignment_offset(block, uintptr(alignment))
size, size_ok := safe_add(min_size, alignment_offset)
if !size_ok {
err = .Out_Of_Memory
return
}
if to_be_used, ok := safe_add(block.used, size); !ok || to_be_used > block.reserved {
err = .Out_Of_Memory
return
}
assert(block.committed <= block.reserved)
do_commit_if_necessary(block, size, default_commit_size) or_return
data = block.base[block.used+alignment_offset:][:min_size]
block.used += size
// sanitizer.address_unpoison(data)
return
}
@(require_results, no_sanitize_address)
arena_alloc :: proc(arena: ^Arena, size: uint, alignment: uint, loc := #caller_location) -> (data: []byte, err: Allocator_Error) {
assert(alignment & (alignment-1) == 0, "non-power of two alignment", loc)
size := size
if size == 0 {
return nil, nil
}
sync.mutex_guard(&arena.mutex)
return arena_alloc_unguarded(arena, size, alignment, loc)
}
@(no_sanitize_address)
arena_allocator_proc :: proc(allocator_data: rawptr, mode: mem.Allocator_Mode,
size, alignment: int,
old_memory: rawptr, old_size: int,
location := #caller_location) -> (data: []byte, err: Allocator_Error) {
switch mode {
case .Resize, .Resize_Non_Zeroed:
// etc
_ = alloc_from_memory_block(block, new_end - old_end, 1, default_commit_size=arena.default_commit_size) or_return
// etc
new_memory := arena_alloc_unguarded(arena, size, alignment, location) or_return
}
return
}
Memory Block Alloc
// Linux
_commit :: proc "contextless" (data: rawptr, size: uint) -> Allocator_Error {
errno := linux.mprotect(data, size, {.READ, .WRITE})
if errno == .EINVAL {
return .Invalid_Pointer
} else if errno == .ENOMEM {
return .Out_Of_Memory
}
return nil
}
// Windows
@(no_sanitize_address)
_commit :: proc "contextless" (data: rawptr, size: uint) -> Allocator_Error {
result := VirtualAlloc(data, size, MEM_COMMIT, PAGE_READWRITE)
if result == nil {
switch err := GetLastError(); err {
case 0:
return .Invalid_Argument
case ERROR_INVALID_ADDRESS, ERROR_COMMITMENT_LIMIT:
return .Out_Of_Memory
}
return .Out_Of_Memory
}
return nil
}
@(no_sanitize_address)
commit :: proc "contextless" (data: rawptr, size: uint) -> Allocator_Error {
// sanitizer.address_unpoison(data, size)
return _commit(data, size)
}
// Linux
_reserve :: proc "contextless" (size: uint) -> (data: []byte, err: Allocator_Error) {
addr, errno := linux.mmap(0, size, {}, {.PRIVATE, .ANONYMOUS})
if errno == .ENOMEM {
return nil, .Out_Of_Memory
} else if errno == .EINVAL {
return nil, .Invalid_Argument
}
return (cast([^]byte)addr)[:size], nil
}
// Windows
@(no_sanitize_address)
_reserve :: proc "contextless" (size: uint) -> (data: []byte, err: Allocator_Error) {
result := VirtualAlloc(nil, size, MEM_RESERVE, PAGE_READWRITE)
if result == nil {
err = .Out_Of_Memory
return
}
data = ([^]byte)(result)[:size]
return
}
@(require_results, no_sanitize_address)
reserve :: proc "contextless" (size: uint) -> (data: []byte, err: Allocator_Error) {
return _reserve(size)
}
@(no_sanitize_address)
platform_memory_alloc :: proc "contextless" (to_commit, to_reserve: uint) -> (block: ^Platform_Memory_Block, err: Allocator_Error) {
to_commit, to_reserve := to_commit, to_reserve
to_reserve = max(to_commit, to_reserve)
total_to_reserved := max(to_reserve, size_of(Platform_Memory_Block))
to_commit = clamp(to_commit, size_of(Platform_Memory_Block), total_to_reserved)
data := reserve(total_to_reserved) or_return
commit_err := commit(raw_data(data), to_commit)
assert_contextless(commit_err == nil)
block = (^Platform_Memory_Block)(raw_data(data))
block.committed = to_commit
block.reserved = to_reserve
return
}
@(require_results, no_sanitize_address)
memory_block_alloc :: proc(committed, reserved: uint, alignment: uint = 0, flags: Memory_Block_Flags = {}) -> (block: ^Memory_Block, err: Allocator_Error) {
page_size := DEFAULT_PAGE_SIZE
assert(mem.is_power_of_two(uintptr(page_size)))
committed := committed
reserved := reserved
committed = align_formula(committed, page_size)
reserved = align_formula(reserved, page_size)
committed = clamp(committed, 0, reserved)
total_size := reserved + alignment + size_of(Platform_Memory_Block)
base_offset := mem.align_forward_uintptr(size_of(Platform_Memory_Block), max(uintptr(alignment), align_of(Platform_Memory_Block)))
protect_offset := uintptr(0)
do_protection := false
if .Overflow_Protection in flags { // overflow protection
rounded_size := reserved
total_size = uint(rounded_size + 2*page_size)
base_offset = uintptr(page_size + rounded_size - uint(reserved))
protect_offset = uintptr(page_size + rounded_size)
do_protection = true
}
pmblock := platform_memory_alloc(0, total_size) or_return
pmblock.block.base = ([^]byte)(pmblock)[base_offset:]
platform_memory_commit(pmblock, uint(base_offset) + committed) or_return
// Should be zeroed
assert(pmblock.block.used == 0)
assert(pmblock.block.prev == nil)
if do_protection {
protect(([^]byte)(pmblock)[protect_offset:], page_size, Protect_No_Access)
}
pmblock.block.committed = committed
pmblock.block.reserved = reserved
return &pmblock.block, nil
}
@(require_results, no_sanitize_address)
arena_init_growing :: proc(arena: ^Arena, reserved: uint = DEFAULT_ARENA_GROWING_MINIMUM_BLOCK_SIZE) -> (err: Allocator_Error) {
arena.kind = .Growing
arena.curr_block = memory_block_alloc(0, reserved, {}) or_return
arena.total_used = 0
arena.total_reserved = arena.curr_block.reserved
if arena.minimum_block_size == 0 {
arena.minimum_block_size = reserved
}
// sanitizer.address_poison(arena.curr_block.base[:arena.curr_block.committed])
return
}
@(require_results, no_sanitize_address)
arena_init_static :: proc(arena: ^Arena, reserved: uint = DEFAULT_ARENA_STATIC_RESERVE_SIZE, commit_size: uint = DEFAULT_ARENA_STATIC_COMMIT_SIZE) -> (err: Allocator_Error) {
arena.kind = .Static
arena.curr_block = memory_block_alloc(commit_size, reserved, {}) or_return
arena.total_used = 0
arena.total_reserved = arena.curr_block.reserved
// sanitizer.address_poison(arena.curr_block.base[:arena.curr_block.committed])
return
}
Memory Block Dealloc
// Windows (this one seems odd)
@(no_sanitize_address)
_release :: proc "contextless" (data: rawptr, size: uint) {
VirtualFree(data, 0, MEM_RELEASE)
}
// Linux
_release :: proc "contextless" (data: rawptr, size: uint) {
_ = linux.munmap(data, size)
}
@(no_sanitize_address)
release :: proc "contextless" (data: rawptr, size: uint) {
// sanitizer.address_unpoison(data, size)
_release(data, size)
}
@(no_sanitize_address)
platform_memory_free :: proc "contextless" (block: ^Platform_Memory_Block) {
if block != nil {
release(block, block.reserved)
}
}
@(no_sanitize_address)
memory_block_dealloc :: proc(block_to_free: ^Memory_Block) {
if block := (^Platform_Memory_Block)(block_to_free); block != nil {
platform_memory_free(block)
}
}
-
For Growing arenas :
-
vmem.arena_free_all()-
Will shrink the arena to the size of the first Memory Block.
-
Confirmed : This is also shown in the Task Manager, as having much less memory when freeing all.
-
Deallocates all but the first memory block of the arena and resets the allocator's usage to 0.
@(no_sanitize_address)
arena_free_all :: proc(arena: ^Arena, loc := #caller_location) {
	switch arena.kind {
	case .Growing:
		sync.mutex_guard(&arena.mutex)
		// NOTE(bill): Free all but the first memory block (if it exists)
		for arena.curr_block != nil && arena.curr_block.prev != nil {
			arena_growing_free_last_memory_block(arena, loc)
		}
		// Zero the first block's memory
		if arena.curr_block != nil {
			curr_block_used := int(arena.curr_block.used)
			arena.curr_block.used = 0
			// sanitizer.address_unpoison(arena.curr_block.base[:curr_block_used])
			mem.zero(arena.curr_block.base, curr_block_used)
			// sanitizer.address_poison(arena.curr_block.base[:arena.curr_block.committed])
		}
		arena.total_used = 0
	case .Static, .Buffer:
		arena_static_reset_to(arena, 0)
	}
	arena.total_used = 0
}
-
-
Allocator Procedure
// The allocator procedure used by an `Allocator` produced by `arena_allocator`
@(no_sanitize_address)
arena_allocator_proc :: proc(allocator_data: rawptr, mode: mem.Allocator_Mode,
size, alignment: int,
old_memory: rawptr, old_size: int,
location := #caller_location) -> (data: []byte, err: Allocator_Error) {
arena := (^Arena)(allocator_data)
size, alignment := uint(size), uint(alignment)
old_size := uint(old_size)
switch mode {
case .Alloc, .Alloc_Non_Zeroed:
return arena_alloc(arena, size, alignment, location)
case .Free:
err = .Mode_Not_Implemented
case .Free_All:
arena_free_all(arena, location)
case .Resize, .Resize_Non_Zeroed:
old_data := ([^]byte)(old_memory)
switch {
case old_data == nil:
return arena_alloc(arena, size, alignment, location)
case size == old_size:
// return old memory
data = old_data[:size]
return
case size == 0:
err = .Mode_Not_Implemented
return
}
sync.mutex_guard(&arena.mutex)
if uintptr(old_data) & uintptr(alignment-1) == 0 {
if size < old_size {
// shrink data in-place
data = old_data[:size]
// sanitizer.address_poison(old_data[size:old_size])
return
}
if block := arena.curr_block; block != nil {
start := uint(uintptr(old_memory)) - uint(uintptr(block.base))
old_end := start + old_size
new_end := start + size
if start < old_end && old_end == block.used && new_end <= block.reserved {
// grow data in-place, adjusting next allocation
prev_used := block.used
_ = alloc_from_memory_block(block, new_end - old_end, 1, default_commit_size=arena.default_commit_size) or_return
arena.total_used += block.used - prev_used
data = block.base[start:new_end]
// sanitizer.address_unpoison(data)
return
}
}
}
new_memory := arena_alloc_unguarded(arena, size, alignment, location) or_return
if new_memory == nil {
return
}
copy(new_memory, old_data[:old_size])
// sanitizer.address_poison(old_data[:old_size])
return new_memory, nil
case .Query_Features:
set := (^mem.Allocator_Mode_Set)(old_memory)
if set != nil {
set^ = {.Alloc, .Alloc_Non_Zeroed, .Free_All, .Resize, .Query_Features}
}
case .Query_Info:
err = .Mode_Not_Implemented
}
return
}
Rollback the offset from
vmem.Arena -> .Static
with
vmem.arena_static_reset_to
-
Unlike other "rollback arena options", there's no helper with that, but the following procedure can be used:
-
Resets the memory of a Static or Buffer arena to a specific
position (offset) and zeroes the previously used memory. -
It doesn't have a begin , end , or guard ; the offset needs to be defined by the user without any helpers.
-
It doesn't "free" the memory, etc.
@(no_sanitize_address)
arena_static_reset_to :: proc(arena: ^Arena, pos: uint, loc := #caller_location) -> bool {
	sync.mutex_guard(&arena.mutex)
	if arena.curr_block != nil {
		assert(arena.kind != .Growing, "expected a non .Growing arena", loc)
		prev_pos := arena.curr_block.used
		arena.curr_block.used = clamp(pos, 0, arena.curr_block.reserved)
		if prev_pos > pos {
			mem.zero_slice(arena.curr_block.base[arena.curr_block.used:][:prev_pos-pos])
		}
		arena.total_used = arena.curr_block.used
		// sanitizer.address_poison(arena.curr_block.base[:arena.curr_block.committed])
		return true
	} else if pos == 0 {
		arena.total_used = 0
		return true
	}
	return false
}
-
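The core idea of the procedure above can be sketched in plain C: clamp the new position, zero whatever was in use past it, and update the usage counter. This is a minimal illustrative sketch, not the Odin implementation; `StaticArena` and `arena_reset_to` are hypothetical names.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of the reset-to-offset idea behind
   arena_static_reset_to (single, non-growing block only). */
typedef struct {
    unsigned char *base;
    size_t reserved; /* total size of the backing region */
    size_t used;     /* current offset (end of the last allocation) */
} StaticArena;

int arena_reset_to(StaticArena *a, size_t pos) {
    if (a->base == NULL) {
        return pos == 0; /* an empty arena can only be "reset" to 0 */
    }
    size_t prev = a->used;
    if (pos > a->reserved) pos = a->reserved; /* clamp into the region */
    a->used = pos;
    if (prev > pos) {
        /* zero the bytes that were in use past the new offset */
        memset(a->base + pos, 0, prev - pos);
    }
    return 1;
}
```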
Free last Memory Block from
vmem.Arena -> .Growing
with
vmem.Arena_Temp
-
Is a way to produce temporary watermarks to reset an arena to a previous state.
-
All uses of an
Arena_Temp must be handled by ending them with arena_temp_end or ignoring them with arena_temp_ignore.
Arena :: struct {
kind: Arena_Kind,
curr_block: ^Memory_Block,
total_used: uint,
total_reserved: uint,
default_commit_size: uint, // commit size <= reservation size
minimum_block_size: uint, // block size == total reservation
temp_count: uint,
mutex: sync.Mutex,
}
Memory_Block :: struct {
prev: ^Memory_Block,
base: [^]byte,
used: uint,
committed: uint,
reserved: uint,
}
Arena_Temp :: struct {
arena: ^Arena,
block: ^Memory_Block,
used: uint,
}
Usage
-
Begin :
@(require_results, no_sanitize_address)
arena_temp_begin :: proc(arena: ^Arena, loc := #caller_location) -> (temp: Arena_Temp) {
	assert(arena != nil, "nil arena", loc)
	sync.mutex_guard(&arena.mutex)
	temp.arena = arena
	temp.block = arena.curr_block
	if arena.curr_block != nil {
		temp.used = arena.curr_block.used
	}
	arena.temp_count += 1
	return
}
-
End :
@(no_sanitize_address)
arena_growing_free_last_memory_block :: proc(arena: ^Arena, loc := #caller_location) {
	if free_block := arena.curr_block; free_block != nil {
		assert(arena.kind == .Growing, "expected a .Growing arena", loc)
		arena.total_used -= free_block.used
		arena.total_reserved -= free_block.reserved
		arena.curr_block = free_block.prev
		// sanitizer.address_poison(free_block.base[:free_block.committed])
		memory_block_dealloc(free_block)
	}
}
@(no_sanitize_address)
arena_temp_end :: proc(temp: Arena_Temp, loc := #caller_location) {
	assert(temp.arena != nil, "nil arena", loc)
	arena := temp.arena
	sync.mutex_guard(&arena.mutex)
	if temp.block != nil {
		memory_block_found := false
		for block := arena.curr_block; block != nil; block = block.prev {
			if block == temp.block {
				memory_block_found = true
				break
			}
		}
		if !memory_block_found {
			assert(arena.curr_block == temp.block, "memory block stored within Arena_Temp not owned by Arena", loc)
		}
		for arena.curr_block != temp.block {
			arena_growing_free_last_memory_block(arena)
		}
		if block := arena.curr_block; block != nil {
			assert(block.used >= temp.used, "out of order use of arena_temp_end", loc)
			amount_to_zero := block.used-temp.used
			mem.zero_slice(block.base[temp.used:][:amount_to_zero])
			block.used = temp.used
			arena.total_used -= amount_to_zero
		}
	}
	assert(arena.temp_count > 0, "double-use of arena_temp_end", loc)
	arena.temp_count -= 1
}
-
Guard :
-
I didn't find any guard implementations for this one.
-
-
Ignore :
@(no_sanitize_address)
arena_temp_ignore :: proc(temp: Arena_Temp, loc := #caller_location) {
	assert(temp.arena != nil, "nil arena", loc)
	arena := temp.arena
	sync.mutex_guard(&arena.mutex)
	assert(arena.temp_count > 0, "double-use of arena_temp_end", loc)
	arena.temp_count -= 1
}
-
Check :
-
Asserts that all uses of
Arena_Temp have been ended for an Arena .
@(no_sanitize_address)
arena_check_temp :: proc(arena: ^Arena, loc := #caller_location) {
	assert(arena.temp_count == 0, "Arena_Temp not been ended", loc)
}
-
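The watermark mechanism above can be sketched in plain C for the single-block, non-growing case: remember the current offset on begin, then zero and roll back to it on end. This is an illustrative sketch with hypothetical names (`Arena`, `ArenaTemp`, `temp_begin`, `temp_end`), not the Odin implementation.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Minimal watermark sketch: remember the offset, roll it back later. */
typedef struct {
    unsigned char *base;
    size_t cap;
    size_t used;
    int temp_count; /* tracks unmatched begins, like arena.temp_count */
} Arena;

typedef struct {
    Arena *arena;
    size_t used; /* offset at the time of begin */
} ArenaTemp;

ArenaTemp temp_begin(Arena *a) {
    ArenaTemp t = { a, a->used };
    a->temp_count += 1;
    return t;
}

void temp_end(ArenaTemp t) {
    Arena *a = t.arena;
    assert(a->used >= t.used && "out of order temp_end");
    /* zero everything allocated inside the temporary scope */
    memset(a->base + t.used, 0, a->used - t.used);
    a->used = t.used;
    assert(a->temp_count > 0 && "double temp_end");
    a->temp_count -= 1;
}
```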
Arena: Backed buffer as an arena (
mem.Arena
)
-
All those names are interchangeable.
-
It's an allocator that uses a single backing buffer for allocations.
-
The buffer is used contiguously, from start to end. Each subsequent allocation occupies the next adjacent region of memory in the buffer. Since the arena allocator does not keep track of any metadata associated with the allocations and their locations, it is impossible to free individual allocations.
-
The arena allocator can be used for temporary allocations in frame-based memory management. Games are one example of such applications. A global arena can be used for any temporary memory allocations, and at the end of each frame all temporary allocations are freed. Since no temporary object is going to live longer than a frame, no lifetimes are violated.
-
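The frame-based pattern described above can be sketched in C: one bump arena serves all per-frame temporaries and is reset in a single step at the end of each frame. Illustrative names (`FrameArena`, `frame_alloc`, `frame_free_all`), not an API from any library.

```c
#include <assert.h>
#include <stddef.h>

/* Per-frame temporary allocation sketch. */
typedef struct {
    unsigned char *base;
    size_t cap;
    size_t used;
} FrameArena;

void *frame_alloc(FrameArena *a, size_t size) {
    if (a->used + size > a->cap) return NULL; /* out of memory this frame */
    void *p = a->base + a->used;
    a->used += size; /* bump the offset: O(1) allocation */
    return p;
}

void frame_free_all(FrameArena *a) {
    a->used = 0; /* every per-frame allocation is invalidated at once */
}
```

Because nothing allocated here outlives a frame, the single reset at frame end frees everything without tracking individual allocations.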
The arena’s logic only requires an offset (or pointer) to indicate the end of the last allocation.
-
To allocate some memory from the arena, it is as simple as moving the offset (or pointer) forward. In Big-O notation, the allocation has complexity of O(1) (constant).
-
On arenas being slices, it's important to realize that that is just one implementation. The abstract idea is simply to allocate linearly from a buffer so that you can quickly free everything at once. Whether it's a single buffer that cannot grow at all depends entirely on the arena allocator implementation in question.
-
You cannot deallocate memory individually in an arena allocator.
-
free for pointers created using an arena does not work. -
Returns the error
Mode_Not_Implemented.
-
-
The correct approach is to use
delete on the entire arena.
-
-
Problems of using Arena Allocators for arrays with changing capacity - Karl Zylinski .
-
Article .
-
Shows problems with using
make([dynamic]int, arena_alloc).-
"Trail of dead stuff, for every resize".
-
-
Virtual arenas don't always have this problem, as there's a special condition to avoid it, but it doesn't solve every case.
-
-
~ Arena Allocators - Ryan Fleury .
-
It introduces DOD and tries to justify to the students how RAII can be really bad, etc.
-
When it comes to the arena, though, I didn't really love the explanation. The arena could be really simple, but I felt his examples went in a specific direction that could have been simplified.
-
Most of the talk is: DOD -> A specific implementation of Arena.
-
Article .
-
Arena :: struct {
data: []byte,
offset: int,
peak_used: int,
temp_count: int,
}
@(require_results)
arena_allocator :: proc(arena: ^Arena) -> Allocator {
return Allocator{
procedure = arena_allocator_proc,
data = arena, // The DATA is the arena.
}
}
Rationale
-
The simplest arena allocator could look like this:
static unsigned char *arena_buffer;
static size_t arena_buffer_length;
static size_t arena_offset;
void *arena_alloc(size_t size) {
// Check to see if the backing memory has space left
if (arena_offset+size <= arena_buffer_length) {
void *ptr = &arena_buffer[arena_offset];
arena_offset += size;
// Zero new memory by default
memset(ptr, 0, size);
return ptr;
}
// Return NULL if the arena is out of memory
return NULL;
}
-
There are two issues with this basic approach:
-
You cannot reuse this procedure for different arenas
-
Can be easily solved by coupling that global data into a structure and passing that to the procedure
arena_alloc.
-
-
The pointer returned may not be aligned correctly for the data you need.
-
This requires understanding the basic issues of unaligned memory .
-
-
-
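The alignment issue can be fixed with the usual round-up idiom: bump the current pointer up to the next multiple of a power-of-two alignment. A minimal sketch of that idiom in C (the `align_forward` name mirrors the helper used later in the notes):

```c
#include <assert.h>
#include <stdint.h>

/* Round ptr up to the next multiple of align.
   align must be a power of two. */
uintptr_t align_forward(uintptr_t ptr, uintptr_t align) {
    uintptr_t mod = ptr & (align - 1); /* fast modulo for powers of two */
    return mod ? ptr + (align - mod) : ptr;
}
```

A practical allocator applies this to the current offset before handing out the pointer, charging the padding bytes to the arena.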
It's also missing some important features of a practical implementation:
-
init, alloc, free, resize, free_all.
-
Initialize an arena
-
Initializes the arena
a with memory region data as its backing buffer.
arena_init :: proc(a: ^Arena, data: []byte) {
a.data = data
a.offset = 0
a.peak_used = 0
a.temp_count = 0
// sanitizer.address_poison(a.data)
}
Allocator Procedure
arena_allocator_proc :: proc(
allocator_data: rawptr,
mode: Allocator_Mode,
size: int,
alignment: int,
old_memory: rawptr,
old_size: int,
loc := #caller_location,
) -> ([]byte, Allocator_Error) {
arena := cast(^Arena)allocator_data
switch mode {
case .Alloc:
return arena_alloc_bytes(arena, size, alignment, loc)
case .Alloc_Non_Zeroed:
return arena_alloc_bytes_non_zeroed(arena, size, alignment, loc)
case .Free:
return nil, .Mode_Not_Implemented
case .Free_All:
arena_free_all(arena)
case .Resize:
return default_resize_bytes_align(byte_slice(old_memory, old_size), size, alignment, arena_allocator(arena), loc)
case .Resize_Non_Zeroed:
return default_resize_bytes_align_non_zeroed(byte_slice(old_memory, old_size), size, alignment, arena_allocator(arena), loc)
case .Query_Features:
set := (^Allocator_Mode_Set)(old_memory)
if set != nil {
set^ = {.Alloc, .Alloc_Non_Zeroed, .Free_All, .Resize, .Resize_Non_Zeroed, .Query_Features}
}
return nil, nil
case .Query_Info:
return nil, .Mode_Not_Implemented
}
return nil, nil
}
Allocate
-
All allocation procedures call this one:
-
Allocate non-initialized memory from an arena.
-
This procedure allocates
size bytes of memory aligned on a boundary specified by alignment from an arena a. -
The allocated memory is not explicitly zero-initialized. This procedure returns a slice of the newly allocated memory region.
-
It creates a byte slice by using a pointer and a length. The pointer is within the region of the arena.
@(require_results)
arena_alloc_bytes_non_zeroed :: proc(
a: ^Arena,
size: int,
alignment := DEFAULT_ALIGNMENT,
loc := #caller_location
) -> ([]byte, Allocator_Error) {
if a.data == nil {
panic("Allocation on uninitialized Arena allocator.", loc)
}
#no_bounds_check end := &a.data[a.offset]
ptr := align_forward(end, uintptr(alignment))
total_size := size + ptr_sub((^byte)(ptr), (^byte)(end))
if a.offset + total_size > len(a.data) {
return nil, .Out_Of_Memory
}
a.offset += total_size
a.peak_used = max(a.peak_used, a.offset)
result := byte_slice(ptr, size)
// ensure_poisoned(result)
// sanitizer.address_unpoison(result)
return result, nil
}
Free All
-
Free all memory back to the arena allocator.
arena_free_all :: proc(a: ^Arena) {
a.offset = 0
// sanitizer.address_poison(a.data)
}
Rollback the offset from
mem.Arena
with:
mem.Arena_Temp_Memory
-
Temporary memory region of an
Arena allocator. -
Temporary memory regions of an arena act as "save-points" for the allocator.
-
When one is created, the subsequent allocations are done inside the temporary memory region.
-
When
end_arena_temp_memory is called, the arena is rolled back, and all of the memory that was allocated from the arena will be freed. -
Multiple temporary memory regions can exist at the same time for an arena.
Arena_Temp_Memory :: struct {
arena: ^Arena,
prev_offset: int,
}
Usage
-
Begin :
-
Creates a temporary memory region. After a temporary memory region is created, all allocations are said to be inside the temporary memory region, until
end_arena_temp_memory is called.
@(require_results)
begin_arena_temp_memory :: proc(a: ^Arena) -> Arena_Temp_Memory {
	tmp: Arena_Temp_Memory
	tmp.arena = a
	tmp.prev_offset = a.offset
	a.temp_count += 1
	return tmp
}
-
-
End :
-
Ends the temporary memory region for an arena. All of the allocations inside the temporary memory region will be freed to the arena.
end_arena_temp_memory :: proc(tmp: Arena_Temp_Memory) {
	assert(tmp.arena.offset >= tmp.prev_offset)
	assert(tmp.arena.temp_count > 0)
	// sanitizer.address_poison(tmp.arena.data[tmp.prev_offset:tmp.arena.offset])
	tmp.arena.offset = tmp.prev_offset
	tmp.arena.temp_count -= 1
}
-
-
Guard :
-
I didn't find any guard implementations for this one.
-
Arena: Growing
mem.Arena
(
mem.Dynamic_Arena
)
-
The dynamic arena allocator uses blocks of a specific size, allocated on-demand using the block allocator. This allocator acts similarly to
Arena. -
All allocations in a block happen contiguously, from start to end. If an allocation does not fit into the remaining space of the block and its size is smaller than the specified out-band size, a new block is allocated using the
block_allocator and the allocation is performed from the newly-allocated block. -
If an allocation is larger than the specified out-band size, a new block is allocated such that the allocation fits into this new block. This is referred to as an out-band allocation . The out-band blocks are kept separately from normal blocks.
-
Just like
Arena, the dynamic arena does not support freeing of individual objects.
Dynamic_Arena :: struct {
block_size: int,
out_band_size: int,
alignment: int,
unused_blocks: [dynamic]rawptr,
used_blocks: [dynamic]rawptr,
out_band_allocations: [dynamic]rawptr,
current_block: rawptr,
current_pos: rawptr,
bytes_left: int,
block_allocator: Allocator,
}
Arena:
context.temp_allocator
(
runtime.Default_Temp_Allocator
)
-
Arena here is a runtime.Arena -
This
Arena is a growing arena that is only used for the default temp allocator. -
"For your own growing arena needs, prefer
Arena from core:mem/virtual".
-
-
By default, every
Memory_Block is backed by the context.allocator.
Arena :: struct {
backing_allocator: Allocator,
curr_block: ^Memory_Block,
total_used: uint,
total_capacity: uint,
minimum_block_size: uint,
temp_count: uint,
}
Memory_Block :: struct {
prev: ^Memory_Block,
allocator: Allocator,
base: [^]byte,
used: uint,
capacity: uint,
}
Default_Temp_Allocator :: struct {
arena: Arena,
}
@(require_results)
default_temp_allocator :: proc(allocator: ^Default_Temp_Allocator) -> Allocator {
return Allocator{
procedure = default_temp_allocator_proc,
data = allocator,
}
}
Default
context.temp_allocator
-
Default_Temp_Allocator is a nil_allocator when NO_DEFAULT_TEMP_ALLOCATOR is true. -
context.temp_allocator is typically called with free_all(context.temp_allocator) once per "frame-loop" to prevent it from "leaking" memory. -
No Default :
NO_DEFAULT_TEMP_ALLOCATOR: bool : ODIN_OS == .Freestanding || ODIN_DEFAULT_TO_NIL_ALLOCATOR
-
Consequence of calling
-default-to-nil-allocator as a compiler flag.
-
Where is the memory actually stored
-
The
Memory_Block structs and the reserved region within the context.temp_allocator are stored via its arena.backing_allocator (usually context.allocator ). -
Analysis :
@(require_results)
memory_block_alloc :: proc(allocator: Allocator, capacity: uint, alignment: uint, loc := #caller_location) -> (block: ^Memory_Block, err: Allocator_Error) {
	total_size := uint(capacity + max(alignment, size_of(Memory_Block)))
	// The total size of the data (`[]byte`) that will be used for `mem_alloc`.
	// It's `base_offset + capacity`; in other words: the `Memory_Block` struct + the `block.base` region.
	base_offset := uintptr(max(alignment, size_of(Memory_Block)))
	// An offset into the data (`[]byte`) that will be allocated.
	// It represents the start of `block.base`, which is the region the block uses to allocate new data when `alloc_from_memory_block` is called.
	min_alignment: int = max(16, align_of(Memory_Block), int(alignment))
	// I'm not completely sure, but it's only used in `mem_alloc`.
	data := mem_alloc(int(total_size), min_alignment, allocator, loc) or_return
	// A `[]byte` is allocated using the backing_allocator.
	block = (^Memory_Block)(raw_data(data))
	// The pointer to this slice is used as the pointer to the block.
	// This means that the block metadata will be the first thing populating the allocated `[]byte`.
	end := uintptr(raw_data(data)[len(data):])
	// Fancy way to get the pointer one past the last element of the data (`[]byte`) region.
	block.allocator = allocator
	// The backing_allocator is saved as the `block.allocator`.
	block.base = ([^]byte)(uintptr(block) + base_offset)
	// The `base` will be right after the end of the block struct (considering a custom alignment from the procedure args).
	// It represents the start of the region the block uses to allocate new data when `alloc_from_memory_block` is called.
	block.capacity = uint(end - uintptr(block.base))
	// The size of `block.base`.
	// Represents the allocation "capacity" of `block.base`, which is how much memory the block can store.
	// Calculated by the pointer subtraction: `uintptr(end) - uintptr(block.base)`.
	return
}
-
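The layout arithmetic analyzed above (header first, usable region right after, capacity = total minus the header offset) can be sketched in C. This is an illustrative sketch with simplified, hypothetical names; it ignores the alignment parameter and uses `sizeof(Block)` as the base offset.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* One allocation holds the block header at its start and the usable
   `base` region right after it, as in memory_block_alloc. */
typedef struct Block {
    struct Block *prev;
    unsigned char *base;
    size_t used;
    size_t capacity;
} Block;

Block *block_alloc(size_t capacity) {
    size_t base_offset = sizeof(Block);    /* header comes first */
    size_t total = base_offset + capacity; /* header + usable region */
    Block *b = (Block *)calloc(1, total);  /* zeroed: used == 0, prev == NULL */
    if (b == NULL) return NULL;
    b->base = (unsigned char *)b + base_offset; /* base starts after the header */
    b->capacity = total - base_offset;          /* what the block can store */
    return b;
}
```

Freeing the whole block (header and data together) is then a single `free(b)`, which is what makes per-block deallocation cheap.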
What
arena.backing_allocator should be used?-
The
Memory_Blocks need to be able to be freed individually, as this is the main strategy around the context.temp_allocator. -
In that sense, the
backing_allocator should be an allocator that implements .Free ; this means that mem.Arena is not good for this. -
Any allocator that implements
.Free should be enough, I believe.
-
-
So, what's stored "inside the
context.temp_allocator"?-
"Nothing".
-
I mean, the
context.temp_allocator is a runtime.Arena, which is:
Arena :: struct {
	backing_allocator: Allocator,
	curr_block: ^Memory_Block,
	total_used: uint,
	total_capacity: uint,
	minimum_block_size: uint,
	temp_count: uint,
}
-
And it's stored inside the
context (which is on the stack), with its .data being a pointer to global_default_temp_allocator_data, which is a global variable. -
So, the
context.temp_allocator is just a struct on the stack; it doesn't store anything on the heap. Its arena.backing_allocator is what actually decides where the memory is stored.
-
Threading
-
Thread-safe?
-
Ginger Bill:
-
Within a thread, yes. Across? No.
-
It's a thread-local allocator.
-
-
-
See Odin#Context for information on how to handle the
context.temp_allocator if an existing one is used or not. -
Practical example: CPU multithreaded texture loading :
-
How I handled the
context.temp_allocator:-
Each thread has a functional
context.temp_allocator, completely thread-local.
-
-
Storing the image data :
-
Using
context.temp_allocator from the main thread :-
I was first using this while using a mutex in the
pixels = make([]byte, size, allocator) from load_image_file, as the context.temp_allocator is not thread-safe.-
If the allocator were a
vmem.Arena, this would not have been necessary, as the vmem.Arena already has a mutex inside it, making it thread-safe.
-
-
My main idea at first was that I would use the main thread's
context.temp_allocator, so the main thread can keep the data loaded from the other threads, as I need the main thread to be the one responsible for managing the loaded data's lifetime, so that it can later call texture_copy_from_buffer(). -
Though, later I realized that the
context.temp_allocator from the main thread cannot be used, as the main thread also participates in jobs.try_execute_queued_job_globals(), which then causes its own context.temp_allocator to do free_all() after one of its jobs is executed, breaking everything. -
If a guard is used instead of
free_all(), this fixes the freeing problem, but it would be very weird handling guards when the context.temp_allocator is being used in different threads; this is not a good option in this case.
-
-
Using a
vmem.Arena from the main thread :-
Much better. This arena has a mutex and it's already thread-safe.
-
There's no risk of freeing the data from this arena, as it's completely managed by the main thread and untouched by the Jobs System.
-
There's no direct participation of a
context.temp_allocator from a different thread; it's much simpler. -
I'm now using a guard for the
context.temp_allocator after the job is executed; this ensures no incorrect data is deleted by accident by calling free_all(); if this were not done, the main thread would crash after all the jobs are executed, as it lost some important data from the dispatcher scope.
-
-
-
Tracy interaction
-
free_all-
Is ok, as it's just calling the
allocator_proc from inside its backing_allocator. -
If the
backing_allocator is profiled, then it works perfectly fine. -
.Free_All becomes .Free for every Memory_Block, followed by the remaining Memory_Block being zeroed out.
-
Init
-
App initialization :
-
The first thing done before calling the entry point of the code, is:
// Unix example
@(link_name="main", linkage="strong", require)
main :: proc "c" (argc: i32, argv: [^]cstring) -> i32 {
	args__ = argv[:argc]
	context = default_context()
	#force_no_inline _startup_runtime()
	intrinsics.__entry_point()
	#force_no_inline _cleanup_runtime()
	return 0
}
-
The
default_context() will internally call __init_context(), which internally assigns:
c.temp_allocator.procedure = default_temp_allocator_proc
when !NO_DEFAULT_TEMP_ALLOCATOR {
	c.temp_allocator.data = &global_default_temp_allocator_data
}
-
The global_default_temp_allocator_data is defined at compile-time as:
when !NO_DEFAULT_TEMP_ALLOCATOR {
	when ODIN_ARCH == .i386 && ODIN_OS == .Windows {
		// Thread-local storage is problematic on Windows i386
		global_default_temp_allocator_data: Default_Temp_Allocator
	} else {
		@thread_local global_default_temp_allocator_data: Default_Temp_Allocator
	}
}
-
At this point, the .data doesn't have anything besides an empty runtime.Arena.
-
Let the
context.temp_allocatorbe initialized automatically :-
When using the
context.temp_allocator to alloc anything, this procedure will be called:
default_temp_allocator_proc :: proc(allocator_data: rawptr, mode: Allocator_Mode,
                                    size, alignment: int,
                                    old_memory: rawptr, old_size: int, loc := #caller_location) -> (data: []byte, err: Allocator_Error) {
	s := (^Default_Temp_Allocator)(allocator_data)
	return arena_allocator_proc(&s.arena, mode, size, alignment, old_memory, old_size, loc)
}
-
The
runtime.arena_allocator_proc will internally call runtime.arena_alloc. -
Finally, if no
backing_allocator was set for the context.temp_allocator, the default_allocator() will be used:
if arena.backing_allocator.procedure == nil {
	arena.backing_allocator = default_allocator()
}
-
The default size will be:
DEFAULT_TEMP_ALLOCATOR_BACKING_SIZE: int : #config(DEFAULT_TEMP_ALLOCATOR_BACKING_SIZE, 4 * Megabyte)
-
The minimum size is
4 KiB; this is enforced by arena_init. -
The
default_allocator is the heap_allocator if the conditions are met:
when ODIN_DEFAULT_TO_NIL_ALLOCATOR {
	default_allocator_proc :: nil_allocator_proc
	default_allocator :: nil_allocator
} else when ODIN_DEFAULT_TO_PANIC_ALLOCATOR {
	default_allocator_proc :: panic_allocator_proc
	default_allocator :: panic_allocator
} else when ODIN_OS != .Orca && (ODIN_ARCH == .wasm32 || ODIN_ARCH == .wasm64p32) {
	default_allocator :: default_wasm_allocator
	default_allocator_proc :: wasm_allocator_proc
} else {
	default_allocator :: heap_allocator
	default_allocator_proc :: heap_allocator_proc
}
-
-
Manually initialize the
context.temp_allocator:-
Initializes the global temporary allocator used as the default
context.temp_allocator. -
This is ignored when
NO_DEFAULT_TEMP_ALLOCATOR is true. -
"This procedure is not necessary to use the Arena as the default zero as
arena_allocwill set things up if necessary"; this means that if this is not called, thecontext.temp_allocatorwill be initialized automatically during its first allocation." -
As this is a builtin procedure, you can just call it as
init_global_temporary_allocator(..).
@(builtin, disabled=NO_DEFAULT_TEMP_ALLOCATOR)
init_global_temporary_allocator :: proc(size: int, backup_allocator := context.allocator) {
	when !NO_DEFAULT_TEMP_ALLOCATOR {
		default_temp_allocator_init(&global_default_temp_allocator_data, size, backup_allocator)
	}
}
-
Internally, this will be called:
@(require_results)
memory_block_alloc :: proc(allocator: Allocator, capacity: uint, alignment: uint, loc := #caller_location) -> (block: ^Memory_Block, err: Allocator_Error) {
	total_size := uint(capacity + max(alignment, size_of(Memory_Block)))
	base_offset := uintptr(max(alignment, size_of(Memory_Block)))
	min_alignment: int = max(16, align_of(Memory_Block), int(alignment))
	data := mem_alloc(int(total_size), min_alignment, allocator, loc) or_return
	block = (^Memory_Block)(raw_data(data))
	end := uintptr(raw_data(data)[len(data):])
	block.allocator = allocator
	block.base = ([^]byte)(uintptr(block) + base_offset)
	block.capacity = uint(end - uintptr(block.base))
	// sanitizer.address_poison(block.base, block.capacity)
	// Should be zeroed
	assert(block.used == 0)
	assert(block.prev == nil)
	return
}
// Initializes the arena with a usable block.
@(require_results)
arena_init :: proc(arena: ^Arena, size: uint, backing_allocator: Allocator, loc := #caller_location) -> Allocator_Error {
	arena^ = {}
	arena.backing_allocator = backing_allocator
	arena.minimum_block_size = max(size, 1<<12) // minimum block size of 4 KiB
	new_block := memory_block_alloc(arena.backing_allocator, arena.minimum_block_size, 0, loc) or_return
	arena.curr_block = new_block
	arena.total_capacity += new_block.capacity
	return nil
}
default_temp_allocator_init :: proc(s: ^Default_Temp_Allocator, size: int, backing_allocator := context.allocator) {
	_ = arena_init(&s.arena, uint(size), backing_allocator)
}
-
Deinit
-
Called automatically after the
main procedure ends (@(fini)).
arena_destroy :: proc "contextless" (arena: ^Arena, loc := #caller_location) {
for arena.curr_block != nil {
free_block := arena.curr_block
arena.curr_block = free_block.prev
arena.total_capacity -= free_block.capacity
memory_block_dealloc(free_block, loc)
}
arena.total_used = 0
arena.total_capacity = 0
}
default_temp_allocator_destroy :: proc "contextless" (s: ^Default_Temp_Allocator) {
if s != nil {
arena_destroy(&s.arena)
s^ = {}
}
}
@(fini, private)
_destroy_temp_allocator_fini :: proc "contextless" () {
default_temp_allocator_destroy(&global_default_temp_allocator_data)
}
Allocator Proc
default_temp_allocator_proc :: proc(allocator_data: rawptr, mode: Allocator_Mode,
size, alignment: int,
old_memory: rawptr, old_size: int, loc := #caller_location) -> (data: []byte, err: Allocator_Error) {
s := (^Default_Temp_Allocator)(allocator_data)
return arena_allocator_proc(&s.arena, mode, size, alignment, old_memory, old_size, loc)
}
arena_allocator_proc :: proc(allocator_data: rawptr, mode: Allocator_Mode,
size, alignment: int,
old_memory: rawptr, old_size: int,
location := #caller_location) -> (data: []byte, err: Allocator_Error) {
arena := (^Arena)(allocator_data)
size, alignment := uint(size), uint(alignment)
old_size := uint(old_size)
switch mode {
case .Alloc, .Alloc_Non_Zeroed:
return arena_alloc(arena, size, alignment, location)
case .Free:
err = .Mode_Not_Implemented
case .Free_All:
arena_free_all(arena, location)
case .Resize, .Resize_Non_Zeroed:
old_data := ([^]byte)(old_memory)
switch {
case old_data == nil:
return arena_alloc(arena, size, alignment, location)
case size == old_size:
// return old memory
data = old_data[:size]
return
case size == 0:
err = .Mode_Not_Implemented
return
case uintptr(old_data) & uintptr(alignment-1) == 0:
if size < old_size {
// shrink data in-place
data = old_data[:size]
return
}
if block := arena.curr_block; block != nil {
start := uint(uintptr(old_memory)) - uint(uintptr(block.base))
old_end := start + old_size
new_end := start + size
if start < old_end && old_end == block.used && new_end <= block.capacity {
// grow data in-place, adjusting next allocation
block.used = uint(new_end)
data = block.base[start:new_end]
// sanitizer.address_unpoison(data)
return
}
}
}
new_memory := arena_alloc(arena, size, alignment, location) or_return
if new_memory == nil {
return
}
copy(new_memory, old_data[:old_size])
return new_memory, nil
case .Query_Features:
set := (^Allocator_Mode_Set)(old_memory)
if set != nil {
set^ = {.Alloc, .Alloc_Non_Zeroed, .Free_All, .Resize, .Query_Features}
}
case .Query_Info:
err = .Mode_Not_Implemented
}
return
}
Free last Memory Block from
runtime.Arena
(
context.temp_allocator
) with
runtime.Arena_Temp
/ "Temp Allocator Temp" /
runtime.DEFAULT_TEMP_ALLOCATOR_TEMP_GUARD
-
Is a way to produce temporary watermarks to reset an arena to a previous state.
-
All uses of an
Arena_Temp must be handled by ending them with arena_temp_end or ignoring them with arena_temp_ignore. -
Arena here is a runtime.Arena -
This
Arena is a growing arena that is only used for the default temp allocator. -
"For your own growing arena needs, prefer
Arena from core:mem/virtual".
-
-
base:runtime -> default_temp_allocator_arena.odin
Arena :: struct {
backing_allocator: Allocator,
curr_block: ^Memory_Block,
total_used: uint,
total_capacity: uint,
minimum_block_size: uint,
temp_count: uint,
}
Memory_Block :: struct {
prev: ^Memory_Block,
allocator: Allocator,
base: [^]byte,
used: uint,
capacity: uint,
}
Arena_Temp :: struct {
arena: ^Arena,
block: ^Memory_Block,
used: uint,
}
Differences from the
mem.Arena_Temp
-
The
runtime.Arena_Temp has no Mutex. -
The
runtime.Arena_Temp is made for the runtime.Arena, which is a growing arena; it's not for static arenas. -
Etc, I think these are the main differences.
TLDR and FAQ: How the guard works
-
When exiting the scope:
-
It frees all the new memory blocks from the arena.
-
Any new things in the
temp.block (which is now the arena.curr_block) are zeroed. -
The "arena current position" is rolled back (
block.used).
-
-
Is it inefficient to use this guard everywhere? Where should I use this guard vs just using the
context.temp_allocator directly?-
The guard will not free any memory if there's no new block inside the arena, BUT, it will ensure the new memory created within the arena is zeroed and the "arena current position" is rolled back.
-
In that sense, even though it might have situations where nothing will be freed on the OS, the arena will have "more space", as new things can be allocated disregarding the space used in allocations inside the guard scope.
-
As a conclusion, it might not be that performance-efficient to use the guard everywhere, but it reduces memory spikes. The more guards used, the more frequent the frees can be, reducing the memory spike but bringing the allocator closer to a "general allocator" with
new/free. It's all about lifetimes. A good use of the guard is to place it where it prevents memory spikes but isn't frequent enough to become inefficient.
-
Usage
base:runtime -> default_temp_allocator_arena.odin + default_temporary_allocator.odin
-
Begin :
@(require_results)
arena_temp_begin :: proc(arena: ^Arena, loc := #caller_location) -> (temp: Arena_Temp) {
	assert(arena != nil, "nil arena", loc)
	temp.arena = arena
	temp.block = arena.curr_block
	if arena.curr_block != nil {
		temp.used = arena.curr_block.used
	}
	arena.temp_count += 1
	return
}
@(require_results)
default_temp_allocator_temp_begin :: proc(loc := #caller_location) -> (temp: Arena_Temp) {
	if context.temp_allocator.data == &global_default_temp_allocator_data {
		temp = arena_temp_begin(&global_default_temp_allocator_data.arena, loc)
	}
	return
}
-
The
runtime.Arena has a temp_count to keep track so that _end isn't used twice in a row; if you just use the guard , then this shouldn't matter.
-
-
End :
mem_free :: #force_no_inline proc(ptr: rawptr, allocator := context.allocator, loc := #caller_location) -> Allocator_Error {
	if ptr == nil || allocator.procedure == nil {
		return nil
	}
	_, err := allocator.procedure(allocator.data, .Free, 0, 0, ptr, 0, loc)
	return err
}
memory_block_dealloc :: proc "contextless" (block_to_free: ^Memory_Block, loc := #caller_location) {
	if block_to_free != nil {
		allocator := block_to_free.allocator
		// sanitizer.address_unpoison(block_to_free.base, block_to_free.capacity)
		context = default_context()
		context.allocator = allocator
		mem_free(block_to_free, allocator, loc)
	}
}
arena_free_last_memory_block :: proc(arena: ^Arena, loc := #caller_location) {
	if free_block := arena.curr_block; free_block != nil {
		arena.curr_block = free_block.prev
		arena.total_capacity -= free_block.capacity
		memory_block_dealloc(free_block, loc)
	}
}
arena_temp_end :: proc(temp: Arena_Temp, loc := #caller_location) {
	if temp.arena == nil {
		assert(temp.block == nil)
		assert(temp.used == 0)
		return
	}
	arena := temp.arena
	if temp.block != nil {
		memory_block_found := false
		for block := arena.curr_block; block != nil; block = block.prev {
			if block == temp.block {
				memory_block_found = true
				break
			}
		}
		if !memory_block_found {
			assert(arena.curr_block == temp.block, "memory block stored within Arena_Temp not owned by Arena", loc)
		}
		for arena.curr_block != temp.block {
			arena_free_last_memory_block(arena)
		}
		if block := arena.curr_block; block != nil {
			assert(block.used >= temp.used, "out of order use of arena_temp_end", loc)
			amount_to_zero := block.used-temp.used
			intrinsics.mem_zero(block.base[temp.used:], amount_to_zero)
			// sanitizer.address_poison(block.base[temp.used:block.capacity])
			block.used = temp.used
			arena.total_used -= amount_to_zero
		}
	}
	assert(arena.temp_count > 0, "double-use of arena_temp_end", loc)
	arena.temp_count -= 1
}
default_temp_allocator_temp_end :: proc(temp: Arena_Temp, loc := #caller_location) {
	arena_temp_end(temp, loc)
}
-
The most important operations are:
-
Frees any new memory blocks from the `context.temp_allocator`, comparing to the memory block stored on `arena_temp_begin`:
```odin
for arena.curr_block != temp.block {
	arena_free_last_memory_block(arena)
}
```
- Internally:
```odin
arena.curr_block = free_block.prev
arena.total_capacity -= free_block.capacity
```
- Zero the extra memory used during the scope:
```odin
amount_to_zero := block.used-temp.used
intrinsics.mem_zero(block.base[temp.used:], amount_to_zero)
```
- Revert the `arena.curr_block.used` and `arena.total_used`:
```odin
block.used = temp.used // block is arena.curr_block in this case.
arena.total_used -= amount_to_zero
```
-
-
Guard :
-
This one is used A LOT in the `core` library.
-
The return value from this procedure is never handled on purpose. The only reason there is a return is to send it to the `default_temp_allocator_temp_end` on exiting the scope. The user doesn't usually care about the `Arena_Temp`.
```odin
@(deferred_out=default_temp_allocator_temp_end)
DEFAULT_TEMP_ALLOCATOR_TEMP_GUARD :: #force_inline proc(ignore := false, loc := #caller_location) -> (Arena_Temp, Source_Code_Location) {
	if ignore {
		return {}, loc
	} else {
		return default_temp_allocator_temp_begin(loc), loc
	}
}
```
-
-
Ignore :
-
Ignore the use of an `arena_temp_begin` entirely.
-
The `ignore` is usually used like so, for example:
```odin
runtime.DEFAULT_TEMP_ALLOCATOR_TEMP_GUARD(ignore = context.temp_allocator == context.allocator)
```
```odin
arena_temp_ignore :: proc(temp: Arena_Temp, loc := #caller_location) {
	assert(temp.arena != nil, "nil arena", loc)
	arena := temp.arena
	assert(arena.temp_count > 0, "double-use of arena_temp_end", loc)
	arena.temp_count -= 1
}
```
-
-
Check :
```odin
arena_check_temp :: proc(arena: ^Arena, loc := #caller_location) {
	assert(arena.temp_count == 0, "Arena_Temp not been ended", loc)
}
```
Scratch Allocator
-
The scratch allocator works in a similar way to the `Arena` allocator.
-
It has a backing buffer that is allocated in contiguous regions, from start to end.
-
Each subsequent allocation will be the next adjacent region of memory in the backing buffer.
-
If the allocation doesn't fit into the remaining space of the backing buffer, this allocation is put at the start of the buffer, and all previous allocations will become invalidated.
-
If it doesn't fit :
-
If the allocation doesn't fit into the backing buffer as a whole, it will be allocated using a backing allocator, and the pointer to the allocated memory region will be put into the `leaked_allocations` array. A `Warning`-level log message will be sent as well.
-
The `leaked_allocations` array is managed by the `context` allocator if no `backup_allocator` is specified in `scratch_init`.
-
@(require_results)
scratch_allocator :: proc(allocator: ^Scratch) -> Allocator {
return Allocator{
procedure = scratch_allocator_proc,
data = allocator,
}
}
Resize
-
Allocations which are resized will be resized in-place if they were the last allocation. Otherwise, they are re-allocated to avoid overwriting previous allocations.
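The wrap-around and resize-in-place behavior described above can be sketched in C. This is an illustrative minimal version, not Odin's actual `core:mem` code: the fall-back (leak) path and alignment handling are omitted, and all names are my own.

```c
#include <stddef.h>

/* Minimal scratch allocator sketch: bump-allocates from a fixed buffer and
 * wraps to the start when the remaining space is too small, invalidating
 * earlier allocations. A real implementation would also handle alignment
 * and fall back to a backing allocator for oversized requests. */
typedef struct {
    unsigned char *base;
    size_t         capacity;
    size_t         offset;    /* next free byte */
    void          *prev_ptr;  /* most recent allocation, for in-place resize */
    size_t         prev_size;
} Scratch;

static void scratch_init(Scratch *s, unsigned char *buf, size_t cap) {
    s->base = buf; s->capacity = cap; s->offset = 0;
    s->prev_ptr = NULL; s->prev_size = 0;
}

static void *scratch_alloc(Scratch *s, size_t size) {
    if (size > s->capacity) return NULL;               /* real impl: backing allocator + leak record */
    if (s->offset + size > s->capacity) s->offset = 0; /* wrap: older allocations invalidated */
    void *p = s->base + s->offset;
    s->offset += size;
    s->prev_ptr = p; s->prev_size = size;
    return p;
}

/* Resize in place only if `p` was the last allocation; otherwise re-allocate. */
static void *scratch_resize(Scratch *s, void *p, size_t new_size) {
    if (p != NULL && p == s->prev_ptr &&
        (unsigned char *)p + new_size <= s->base + s->capacity) {
        s->offset = (size_t)((unsigned char *)p - s->base) + new_size;
        s->prev_size = new_size;
        return p;
    }
    return scratch_alloc(s, new_size);
}
```

Note how an allocation that doesn't fit at the current offset lands at the start of the buffer, exactly the "previous allocations become invalidated" case above.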
Stack Allocator (LIFO)
-
The stack allocator is an allocator that allocates data in the backing buffer linearly, from start to end. Each subsequent allocation will get the next adjacent memory region.
-
Unlike the arena allocator, the stack allocator saves allocation metadata and has a strict freeing order: only the last allocated element can be freed. After the last allocated element is freed, the next previous allocated element becomes available for freeing.
-
The metadata is stored in allocation headers, which are located before the start of each allocated memory region. Each header points to the start of the previous allocation header.
-
A stack-like allocator means that the allocator acts like a data structure following the last-in, first-out (LIFO) principle.
-
This has nothing to do with the stack or the stack frame.
-
Evolution of an Arena Allocator
-
As with the arena allocator, an offset into the memory block will be stored and will be moved forwards on every allocation.
-
The difference is that the offset can also be moved backwards when memory is freed. With an arena, you could only free all the memory all at once.
-
Stack :: struct {
data: []byte,
prev_offset: int,
curr_offset: int,
peak_used: int,
}
Stack_Allocation_Header :: struct {
prev_offset: int,
padding: int,
}
@(require_results)
stack_allocator :: proc(stack: ^Stack) -> Allocator {
return Allocator{
procedure = stack_allocator_proc,
data = stack,
}
}
Header
-
The offset of the previous allocation needs to be tracked. This is required in order to free memory on a per-allocation basis.
-
One approach is to store a header which stores information about that allocation. This header allows the allocator to know how far back it should move the offset to free that memory.
-
The stack allocator is the first of many allocators that will use the concept of a header for allocations.
-
-
To allocate some memory from the stack allocator, as with the arena allocator, it is as simple as moving the offset forward while accounting for the header. In Big-O notation, the allocation has complexity of O(1) (constant).
-
To free a block, the header stored before the block of memory can be read in order to move the offset backwards. In Big-O notation, freeing this memory has complexity of O(1) (constant).
-
What's stored in the header :
-
There are three main approaches:
-
Store the padding from the previous offset
-
Store the previous offset
-
Store the size of the allocation
-
-
Implementation
-
See the article Stack Allocator - Ginger Bill .
-
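The header mechanism can be sketched in C as a minimal "store the previous offset" variant. This is illustrative only (not the article's or Odin's code), and alignment handling is omitted:

```c
#include <stddef.h>

/* Minimal LIFO stack allocator sketch: a header before each allocation
 * records the offset to restore, making both alloc and free O(1). */
typedef struct {
    unsigned char *base;
    size_t         capacity;
    size_t         offset;
} Stack;

typedef struct {
    size_t prev_offset;  /* offset value to restore on free */
} Stack_Header;

static void *stack_alloc(Stack *s, size_t size) {
    if (s->offset + sizeof(Stack_Header) + size > s->capacity) return NULL;
    Stack_Header *h = (Stack_Header *)(s->base + s->offset);
    h->prev_offset = s->offset;                /* where to roll back to */
    void *p = s->base + s->offset + sizeof(Stack_Header);
    s->offset += sizeof(Stack_Header) + size;  /* O(1) allocation */
    return p;
}

/* Only valid for the most recent allocation (LIFO order). */
static void stack_free(Stack *s, void *p) {
    if (p == NULL) return;
    Stack_Header *h = (Stack_Header *)((unsigned char *)p - sizeof(Stack_Header));
    s->offset = h->prev_offset;                /* O(1) free */
}
```

Freeing the last allocation moves the offset back past its header, so the next allocation reuses that exact region.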
Improvements :
-
You can extend the stack allocator even further by having two different offsets: one that starts at the beginning and increments forwards, and another that starts at the end and increments backwards. This is called a double-ended stack and allows for the maximization of memory usage whilst keeping fragmentation extremely low (as long as the offsets never overlap).
-
Small Stack Allocator
-
The small stack allocator is just like a `Stack` allocator, with the only difference being an extremely small header size.
-
Unlike the stack allocator, the small stack allows out-of-order freeing of memory, with the stipulation that all allocations made after the freed allocation will become invalidated upon following allocations, as they will begin to overwrite the memory formerly used by the freed allocation.
-
The memory is allocated in the backing buffer linearly, from start to end. Each subsequent allocation will get the next adjacent memory region.
-
The metadata is stored in allocation headers, which are located before the start of each allocated memory region. Each header contains the amount of padding bytes between that header and the end of the previous allocation.
Buddy Memory Allocation
-
The buddy allocator is a type of allocator that splits the backing buffer into multiple regions called buddy blocks .
-
Initially, the allocator only has one block with the size of the backing buffer.
-
Upon each allocation, the allocator finds the smallest block that can fit the size of the requested memory region and splits the block according to the allocation size. If no block can be found, contiguous free blocks are coalesced and the search is performed again.
-
The buddy allocator is a powerful allocator and a conceptually simple algorithm, but implementing it efficiently is a lot harder than all of the previous allocators above.
-
The Buddy Algorithm assumes that the backing memory block is a power-of-two in bytes.
-
When an allocation is requested, the allocator looks for a block whose size is at least the size of the requested allocation (similar to a free list).
-
If the requested allocation size is less than half of the block, it is split into two (left and right), and the two resulting blocks are called “buddies.”
-
If this requested allocation size is still less than half the size of the left buddy, the buddy block is recursively split until the resulting buddy is as small as possible to fit the requested allocation size.
-
When a block is released, we can try to perform coalescence on buddies (contiguous neighboring blocks).
-
Similar to free lists, there are specific conditions that must be met. Coalescence cannot be performed if a block has no (free) buddy, the block is still in use, or the buddy block is partially used.
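The splitting and coalescing bookkeeping relies on a neat property of power-of-two blocks: since every block is a power-of-two in size and naturally aligned within the backing buffer, the buddy of a block at `offset` with size `size` lives at `offset ^ size`. A small C sketch of the two helpers at the heart of this (illustrative, not Odin's implementation):

```c
#include <stddef.h>

/* Round a request up to a power-of-two block size. */
static size_t next_pow2(size_t n) {
    size_t p = 1;
    while (p < n) p <<= 1;
    return p;
}

/* Find a block's buddy: XOR flips exactly the bit that distinguishes
 * the left buddy from the right buddy at this block size. */
static size_t buddy_offset(size_t offset, size_t size) {
    return offset ^ size;
}
```

On free, you compute the buddy's offset this way and coalesce only if that buddy exists, is wholly free, and has the same size.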
Buddy_Block :: struct #align(align_of(uint)) {
size: uint,
is_free: bool,
}
Buddy_Allocator :: struct {
head: ^Buddy_Block,
tail: ^Buddy_Block `fmt:"-"`,
alignment: uint,
}
Pool Allocator
-
A pool splits the supplied backing buffer into chunks of equal size and keeps track of which of the chunks are free.
-
When an allocation is requested, a free chunk is given.
-
When a chunk is freed, it adds that chunk back to the list of free chunks.
-
-
Pool allocators are extremely useful when you need to allocate chunks of memory of the same size that are created and destroyed dynamically, especially in a random order.
-
Pools also have the benefit that arenas and stacks have in that they provide very little fragmentation and allocate/free in constant time O(1) .
-
Pool allocators are usually used to allocate groups of “things” at once which share the same lifetime.
-
An example could be within a game that creates and destroys entities in batches where each entity within a batch shares the same lifetime.
-
-
Free List :
-
A free list is a data structure that internally stores a linked list of the free slots/chunks within the memory buffer.
-
The nodes of the list are stored in-place, meaning there is no need for an additional data structure (e.g., array, list, etc.) to keep track of the free slots.
-
The data is only stored within the backing buffer of the pool allocator.
-
The general approach is to store a header at the beginning of the chunk (not before the chunk like with the stack allocator) which points to the next available free chunk.
-
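The in-place free list described above can be sketched in C. This is a minimal illustration (names are my own, not Odin's `core:mem`): each free chunk stores a pointer to the next free chunk inside the chunk itself, so no side table is needed.

```c
#include <stddef.h>

/* Intrusive free-list node: lives inside each free chunk. */
typedef struct Pool_Node { struct Pool_Node *next; } Pool_Node;

typedef struct {
    Pool_Node *free_head;
} Pool;

/* chunk_size must be at least sizeof(Pool_Node) and suitably aligned. */
static void pool_init(Pool *p, void *buf, size_t chunk_size, size_t chunk_count) {
    unsigned char *mem = (unsigned char *)buf;
    p->free_head = NULL;
    /* Thread every chunk onto the free list (order is irrelevant). */
    for (size_t i = 0; i < chunk_count; i++) {
        Pool_Node *node = (Pool_Node *)(mem + i * chunk_size);
        node->next = p->free_head;
        p->free_head = node;
    }
}

static void *pool_alloc(Pool *p) {          /* O(1): pop a free chunk */
    Pool_Node *node = p->free_head;
    if (node != NULL) p->free_head = node->next;
    return node;
}

static void pool_free(Pool *p, void *ptr) { /* O(1): push the chunk back */
    if (ptr == NULL) return;
    Pool_Node *node = (Pool_Node *)ptr;
    node->next = p->free_head;
    p->free_head = node;
}
```

Both operations are pointer pushes/pops on the list head, which is why pools allocate and free in constant time with no fragmentation between equal-size chunks.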
Implementation
General Purpose: Free List Based Allocator
-
A free list is a general-purpose allocator which, compared to the other allocators we previously looked at, does not impose any restrictions.
-
It allows allocations and deallocations to be out of order and of any size.
-
Due to its nature, the allocator’s performance is not as good as the others previously discussed in this series.
Implementation
-
There are two common approaches to implementing a free list allocator:
-
Using a linked list
-
Using a red-black tree .
-
-
See the article for the implementations.
General Purpose: Heap Allocator
-
Heap Allocators are a high level construct, and a specific kind of allocator.
-
Odin just generalizes the concept of an allocator.
-
A heap in general is a data structure, and in the context of allocators it is a "general purpose allocator". Most common heap allocators are built on top of allocating virtual memory directly. The "general purpose" aspect means you can allocate "things" of varying size and alignment and free them at arbitrary times (i.e., the lifetimes of the allocations are out of order). To do this, they require storing some sort of metadata about the size of the allocation and where the free allocations are (called a free list). More complicated algorithms do more things to be more efficient.
@(require_results)
heap_allocator :: proc() -> Allocator {
return Allocator{
procedure = heap_allocator_proc,
data = nil,
}
}
In `os2`
-
The `heap_allocator` is redefined internally if using Windows.
-
Barinzaya:
-
I'd guess probably because `runtime.heap_allocator` may eventually become an Odin-implemented heap allocator, and `os.heap_allocator` is intended to specifically use the underlying OS allocator (which `runtime.heap_allocator` currently also is).
-
This is done so `os.heap_allocator` is the OS's heap allocator.
-
As for `os2` using its own allocators instead of `context` ones... OS Stuff is Different™ is the usual reply I've seen.
-
Using
heap_allocator()
-
The procedure uses `data = nil`, while the `heap_allocator_proc` doesn't use the `allocator_data: rawptr`. This means that every call to `heap_allocator` uses the same backing region from the OS heap allocator implementation.
-
Example:
```odin
a := runtime.heap_allocator()
b := runtime.heap_allocator()
```
-
`a` and `b` are the same. There's no new `mmap`, etc., being made.
-
Is it thread-safe?
-
Yes.
-
It just uses what the OS provides, which generally is thread-safe. And when we have our own malloc implementation, it'll be thread-safe too.
-
The current PR for it: https://github.com/odin-lang/Odin/pull/4749 .
-
-
ChatGPT: "The C standard library implementations of `malloc`, `calloc`, `realloc`, and `free` provided by all mainstream libc variants (glibc, musl, BSD libc, Windows CRT, etc.) are thread-safe. They use internal locking or per-thread arenas to avoid corruption."
Allocator Proc
heap_allocator_proc :: proc(allocator_data: rawptr, mode: Allocator_Mode,
size, alignment: int,
old_memory: rawptr, old_size: int, loc := #caller_location) -> ([]byte, Allocator_Error) {
// NOTE(tetra, 2020-01-14): The heap doesn't respect alignment.
// Instead, we overallocate by `alignment + size_of(rawptr) - 1`, and insert
// padding. We also store the original pointer returned by heap_alloc right before
// the pointer we return to the user.
aligned_alloc :: proc(size, alignment: int, old_ptr: rawptr, old_size: int, zero_memory := true) -> ([]byte, Allocator_Error) {
// Not(flysand): We need to reserve enough space for alignment, which
// includes the user data itself, the space to store the pointer to
// allocation start, as well as the padding required to align both
// the user data and the pointer.
a := max(alignment, align_of(rawptr))
space := a-1 + size_of(rawptr) + size
allocated_mem: rawptr
force_copy := old_ptr != nil && alignment > align_of(rawptr)
if old_ptr != nil && !force_copy {
original_old_ptr := ([^]rawptr)(old_ptr)[-1]
allocated_mem = heap_resize(original_old_ptr, space)
} else {
allocated_mem = heap_alloc(space, zero_memory)
}
aligned_mem := rawptr(([^]u8)(allocated_mem)[size_of(rawptr):])
ptr := uintptr(aligned_mem)
aligned_ptr := (ptr + uintptr(a)-1) & ~(uintptr(a)-1)
if allocated_mem == nil {
aligned_free(old_ptr)
aligned_free(allocated_mem)
return nil, .Out_Of_Memory
}
aligned_mem = rawptr(aligned_ptr)
([^]rawptr)(aligned_mem)[-1] = allocated_mem
if force_copy {
mem_copy_non_overlapping(aligned_mem, old_ptr, min(old_size, size))
aligned_free(old_ptr)
}
return byte_slice(aligned_mem, size), nil
}
aligned_free :: proc(p: rawptr) {
if p != nil {
heap_free(([^]rawptr)(p)[-1])
}
}
aligned_resize :: proc(p: rawptr, old_size: int, new_size: int, new_alignment: int, zero_memory := true) -> (new_memory: []byte, err: Allocator_Error) {
if p == nil {
return aligned_alloc(new_size, new_alignment, nil, old_size, zero_memory)
}
new_memory = aligned_alloc(new_size, new_alignment, p, old_size, zero_memory) or_return
when ODIN_OS != .Windows {
// NOTE: heap_resize does not zero the new memory, so we do it
if zero_memory && new_size > old_size {
new_region := raw_data(new_memory[old_size:])
conditional_mem_zero(new_region, new_size - old_size)
}
}
return
}
switch mode {
case .Alloc, .Alloc_Non_Zeroed:
return aligned_alloc(size, alignment, nil, 0, mode == .Alloc)
case .Free:
aligned_free(old_memory)
case .Free_All:
return nil, .Mode_Not_Implemented
case .Resize, .Resize_Non_Zeroed:
return aligned_resize(old_memory, old_size, size, alignment, mode == .Resize)
case .Query_Features:
set := (^Allocator_Mode_Set)(old_memory)
if set != nil {
set^ = {.Alloc, .Alloc_Non_Zeroed, .Free, .Resize, .Resize_Non_Zeroed, .Query_Features}
}
return nil, nil
case .Query_Info:
return nil, .Mode_Not_Implemented
}
return nil, nil
}
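The core of `aligned_alloc` above is the over-allocation trick: allocate extra room for the alignment plus one pointer, align the address just past that pointer slot with `(ptr + a-1) & ~(a-1)`, and stash the original allocation pointer at `aligned_ptr[-1]` so free can recover it. A C sketch of just this technique (it mirrors the idea, not Odin's exact code, which also folds in resize and zeroing):

```c
#include <stdint.h>
#include <stdlib.h>

/* Round an address up to the next multiple of a (a must be a power of two). */
static uintptr_t align_up(uintptr_t ptr, uintptr_t a) {
    return (ptr + a - 1) & ~(a - 1);
}

static void *aligned_heap_alloc(size_t size, size_t alignment) {
    /* Over-allocate: worst-case padding + one pointer slot + user data. */
    size_t space = alignment - 1 + sizeof(void *) + size;
    void *raw = malloc(space);
    if (raw == NULL) return NULL;
    /* Align past the pointer slot so aligned[-1] is always inside the block. */
    uintptr_t aligned = align_up((uintptr_t)raw + sizeof(void *), (uintptr_t)alignment);
    ((void **)aligned)[-1] = raw;  /* original pointer lives right before the user pointer */
    return (void *)aligned;
}

static void aligned_heap_free(void *p) {
    if (p != NULL) free(((void **)p)[-1]);
}
```

This is why the Odin version computes `space := a-1 + size_of(rawptr) + size` and stores `allocated_mem` at index `[-1]` of the aligned pointer.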
Alloc
heap_alloc :: proc "contextless" (size: int, zero_memory := true) -> rawptr {
return _heap_alloc(size, zero_memory)
}
-
Linux:
```odin
@(default_calling_convention="c")
foreign libc {
	@(link_name="malloc") _unix_malloc :: proc(size: int) -> rawptr ---
	@(link_name="calloc") _unix_calloc :: proc(num, size: int) -> rawptr ---
}

_heap_alloc :: proc "contextless" (size: int, zero_memory := true) -> rawptr {
	if size <= 0 {
		return nil
	}
	if zero_memory {
		return _unix_calloc(1, size)
	} else {
		return _unix_malloc(size)
	}
}
```
-
Uses the C library allocator (`malloc`, `calloc`), layered over `brk` or `mmap` system calls.
-
The kernel itself does not expose a "heap" API to user space.
-
Each C library (glibc, musl, jemalloc, etc.) implements its own allocator strategy.
-
-
Windows:
```odin
_heap_alloc :: proc "contextless" (size: int, zero_memory := true) -> rawptr {
	HEAP_ZERO_MEMORY :: 0x00000008
	ptr := HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY if zero_memory else 0, uint(size))
	// NOTE(lucas): asan not guaranteed to unpoison win32 heap out of the box, do it ourselves
	sanitizer.address_unpoison(ptr, size)
	return ptr
}
```
-
The heap system (`HeapAlloc`, `HeapFree`, etc.) is part of the Win32 API, built over the NT kernel's virtual memory manager.
-
Each process has one or more heaps managed by the kernel.
-
`HeapAlloc(GetProcessHeap(), ...)` allocates from the process heap directly, with flags controlling behavior (e.g., `HEAP_ZERO_MEMORY` for zeroing).
-
This unifies allocation across the system and avoids relying on C runtime internals, which can differ between MSVC, MinGW, etc.
-
Resize
heap_resize :: proc "contextless" (ptr: rawptr, new_size: int) -> rawptr {
return _heap_resize(ptr, new_size)
}
-
Linux:
```odin
@(default_calling_convention="c")
foreign libc {
	@(link_name="realloc") _unix_realloc :: proc(ptr: rawptr, size: int) -> rawptr ---
}

_heap_resize :: proc "contextless" (ptr: rawptr, new_size: int) -> rawptr {
	// NOTE: _unix_realloc doesn't guarantee new memory will be zeroed on
	// POSIX platforms. Ensure your caller takes this into account.
	return _unix_realloc(ptr, new_size)
}
```
-
Windows:
```odin
_heap_resize :: proc "contextless" (ptr: rawptr, new_size: int) -> rawptr {
	if new_size == 0 {
		_heap_free(ptr)
		return nil
	}
	if ptr == nil {
		return _heap_alloc(new_size)
	}
	HEAP_ZERO_MEMORY :: 0x00000008
	new_ptr := HeapReAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, ptr, uint(new_size))
	// NOTE(lucas): asan not guaranteed to unpoison win32 heap out of the box, do it ourselves
	sanitizer.address_unpoison(new_ptr, new_size)
	return new_ptr
}
```
Free
heap_free :: proc "contextless" (ptr: rawptr) {
_heap_free(ptr)
}
-
Linux:
```odin
@(default_calling_convention="c")
foreign libc {
	@(link_name="free") _unix_free :: proc(ptr: rawptr) ---
}

_heap_free :: proc "contextless" (ptr: rawptr) {
	_unix_free(ptr)
}
```
-
Windows:
```odin
_heap_free :: proc "contextless" (ptr: rawptr) {
	if ptr == nil {
		return
	}
	HeapFree(GetProcessHeap(), 0, ptr)
}
```
Compat Allocator
-
An allocator that keeps track of allocation sizes and passes them along to resizes.
-
This is useful if you are using a library that needs an equivalent of `realloc` but want to use the Odin allocator interface.
-
You want to wrap your allocator into this one if you are trying to use any allocator that relies on the old size to work.
-
The overhead of this allocator is an extra `max(alignment, size_of(Header))` bytes allocated for each allocation; these bytes are used to store the size and alignment.
Compat_Allocator :: struct {
parent: Allocator,
}
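The size-tracking header idea can be sketched in C. This is a simplified illustration of the technique (not Odin's `Compat_Allocator`, which also records alignment and handles resize/poisoning): prepend a header recording the user-visible size so a realloc-style API can be offered on top of an interface that needs the old size.

```c
#include <stddef.h>
#include <stdlib.h>

/* Header stored immediately before the pointer handed to the user. */
typedef struct { size_t size; } Compat_Header;

static void *compat_alloc(size_t size) {
    unsigned char *raw = (unsigned char *)malloc(sizeof(Compat_Header) + size);
    if (raw == NULL) return NULL;
    ((Compat_Header *)raw)->size = size;  /* remember the user-visible size */
    return raw + sizeof(Compat_Header);
}

/* The stored size is what a .Query_Info-style call would report. */
static size_t compat_size(void *p) {
    return ((Compat_Header *)((unsigned char *)p - sizeof(Compat_Header)))->size;
}

static void compat_free(void *p) {
    if (p != NULL) free((unsigned char *)p - sizeof(Compat_Header));
}
```

Free and resize recover the header by stepping back from the user pointer, exactly as the Odin procedure does with `([^]Header)(ptr)[-1]`.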
Allocator Procedure
compat_allocator_proc :: proc(allocator_data: rawptr, mode: Allocator_Mode,
size, alignment: int,
old_memory: rawptr, old_size: int,
location := #caller_location) -> (data: []byte, err: Allocator_Error) {
Header :: struct {
size: int,
alignment: int,
}
@(no_sanitize_address)
get_unpoisoned_header :: #force_inline proc(ptr: rawptr) -> Header {
header := ([^]Header)(ptr)[-1]
// a := max(header.alignment, size_of(Header))
// sanitizer.address_unpoison(rawptr(uintptr(ptr)-uintptr(a)), a)
return header
}
rra := (^Compat_Allocator)(allocator_data)
switch mode {
case .Alloc, .Alloc_Non_Zeroed:
a := max(alignment, size_of(Header))
req_size := size + a
assert(req_size >= 0, "overflow")
allocation := rra.parent.procedure(rra.parent.data, mode, req_size, alignment, old_memory, old_size, location) or_return
#no_bounds_check data = allocation[a:]
([^]Header)(raw_data(data))[-1] = {
size = size,
alignment = alignment,
}
// sanitizer.address_poison(raw_data(allocation), a)
return
case .Free:
header := get_unpoisoned_header(old_memory)
a := max(header.alignment, size_of(Header))
orig_ptr := rawptr(uintptr(old_memory)-uintptr(a))
orig_size := header.size + a
return rra.parent.procedure(rra.parent.data, mode, orig_size, header.alignment, orig_ptr, orig_size, location)
case .Resize, .Resize_Non_Zeroed:
header := get_unpoisoned_header(old_memory)
orig_a := max(header.alignment, size_of(Header))
orig_ptr := rawptr(uintptr(old_memory)-uintptr(orig_a))
orig_size := header.size + orig_a
new_alignment := max(header.alignment, alignment)
a := max(new_alignment, size_of(header))
req_size := size + a
assert(size >= 0, "overflow")
allocation := rra.parent.procedure(rra.parent.data, mode, req_size, new_alignment, orig_ptr, orig_size, location) or_return
#no_bounds_check data = allocation[a:]
([^]Header)(raw_data(data))[-1] = {
size = size,
alignment = new_alignment,
}
// sanitizer.address_poison(raw_data(allocation), a)
return
case .Free_All:
return rra.parent.procedure(rra.parent.data, mode, size, alignment, old_memory, old_size, location)
case .Query_Info:
info := (^Allocator_Query_Info)(old_memory)
if info != nil && info.pointer != nil {
header := get_unpoisoned_header(info.pointer)
info.size = header.size
info.alignment = header.alignment
}
return
case .Query_Features:
data, err = rra.parent.procedure(rra.parent.data, mode, size, alignment, old_memory, old_size, location)
if err != nil {
set := (^Allocator_Mode_Set)(old_memory)
set^ += {.Query_Info}
}
return
case: unreachable()
}
}
Mutex Allocator
-
The mutex allocator is a wrapper for allocators that is used to serialize all allocator requests across multiple threads.
Mutex_Allocator :: struct {
backing: Allocator,
mutex: sync.Mutex,
}
@(require_results)
mutex_allocator :: proc(m: ^Mutex_Allocator) -> Allocator {
return Allocator{
procedure = mutex_allocator_proc,
data = m,
}
}
Allocator Procedure
mutex_allocator_proc :: proc(
allocator_data: rawptr,
mode: Allocator_Mode,
size: int,
alignment: int,
old_memory: rawptr,
old_size: int,
loc := #caller_location,
) -> (result: []byte, err: Allocator_Error) {
m := (^Mutex_Allocator)(allocator_data)
sync.mutex_guard(&m.mutex)
return m.backing.procedure(m.backing.data, mode, size, alignment, old_memory, old_size, loc)
}
Rollback Stack Allocator
-
The Rollback Stack Allocator was designed for the test runner to be fast, able to grow, and respect the Tracking Allocator's requirement for individual frees. It is not overly concerned with fragmentation, however.
-
It has support for expansion when configured with a block allocator and limited support for out-of-order frees.
-
Allocation has constant-time best and usual case performance. At worst, it is linear according to the number of memory blocks.
-
Allocation follows a first-fit strategy when there are multiple memory blocks.
-
Freeing has constant-time best and usual case performance. At worst, it is linear according to the number of memory blocks and number of freed items preceding the last item in a block.
-
Resizing has constant-time performance, if it's the last item in a block, or the new size is smaller. Naturally, this becomes linear-time if there are multiple blocks to search for the pointer's owning block. Otherwise, the allocator defaults to a combined alloc & free operation internally.
-
Out-of-order freeing is accomplished by collapsing a run of freed items from the last allocation backwards.
-
Each allocation has an overhead of 8 bytes and any extra bytes to satisfy the requested alignment.
Rollback_Stack_Block :: struct {
next_block: ^Rollback_Stack_Block,
last_alloc: rawptr,
offset: uintptr,
buffer: []byte,
}
Rollback_Stack :: struct {
head: ^Rollback_Stack_Block,
block_size: int,
block_allocator: Allocator,
}
WASM Allocator
WASM_Allocator :: struct {
// The minimum alignment of allocations.
alignment: uint,
// A region that contains as payload a single forward linked list of pointers to
// root regions of each disjoint region blocks.
list_of_all_regions: ^Root_Region,
// For each of the buckets, maintain a linked list head node. The head node for each
// free region is a sentinel node that does not actually represent any free space, but
// the sentinel is used to avoid awkward testing against (if node == freeRegionHeadNode)
// when adding and removing elements from the linked list, i.e. we are guaranteed that
// the sentinel node is always fixed and there, and the actual free region list elements
// start at free_region_buckets[i].next each.
free_region_buckets: [NUM_FREE_BUCKETS]Region,
// A bitmask that tracks the population status for each of the 64 distinct memory regions:
// a zero at bit position i means that the free list bucket i is empty. This bitmask is
// used to avoid redundant scanning of the 64 different free region buckets: instead by
// looking at the bitmask we can find in constant time an index to a free region bucket
// that contains free memory of desired size.
free_region_buckets_used: BUCKET_BITMASK_T,
// Because wasm memory can only be allocated in pages of 64k at a time, we keep any
// spilled/unused bytes that are left from the allocated pages here, first using this
// when bytes are needed.
spill: []byte,
// Mutex for thread safety, only used if the target feature "atomics" is enabled.
mu: Mutex_State,
}
Tracking Allocator
-
The tracking allocator is an allocator wrapper that tracks memory allocations.
-
This allocator stores all the allocations in a map.
-
Whenever a pointer that's not inside of the map is freed, a `bad_free_array` entry is added.
Tracking_Allocator :: struct {
backing: Allocator,
allocation_map: map[rawptr]Tracking_Allocator_Entry,
bad_free_callback: Tracking_Allocator_Bad_Free_Callback,
bad_free_array: [dynamic]Tracking_Allocator_Bad_Free_Entry,
mutex: sync.Mutex,
clear_on_free_all: bool,
total_memory_allocated: i64,
total_allocation_count: i64,
total_memory_freed: i64,
total_free_count: i64,
peak_memory_allocated: i64,
current_memory_allocated: i64,
}
Demo
-
Using the Tracking Allocator to detect memory leaks and double free {18:29 -> 30:40} .
-
Very interesting.
-
The example includes RayLib.
-
package foo
import "core:mem"
import "core:fmt"
main :: proc() {
track: mem.Tracking_Allocator
mem.tracking_allocator_init(&track, context.allocator)
defer mem.tracking_allocator_destroy(&track)
context.allocator = mem.tracking_allocator(&track)
do_stuff()
for _, leak in track.allocation_map {
fmt.printf("%v leaked %m\n", leak.location, leak.size)
}
}
Limitations
-
"I'm using the Track Allocator to know where I'm getting memory leaks, but it keeps saying the leak happened at `C:/odin/core/strings/builder.odin(171:11) leaked 8 bytes`, but I have no idea what the call stack is, so I'm revisiting everything."
-
"It does attempt to log the location where the allocation was done, but it relies on the appropriate location being passed through. Unfortunately, that's not always possible, e.g., the `io.Stream` interface doesn't pass a location, so when using a `strings.Builder` as an `io.Stream` (or anything else that `Stream`s to dynamic memory), it can't easily track where it originated in your code."
-
-
Virtual Arenas :
-
`virtual.Arena` doesn't use an `Allocator` for its backing memory; it makes direct calls to the OS's virtual memory interface. So a `Tracking_Allocator` can't be used to back it.
-
You can use a `Tracking_Allocator` that wraps the `Arena`, and the `Tracking_Allocator` can interpret `free_all` on it correctly (you'd have to `free_all` before you `destroy` the arena, otherwise the tracking allocator will see it as leaking), but personally I don't see the value of using a tracking allocator on allocations made from an arena (regardless of which one).
-
Allocator Procedure
@(no_sanitize_address)
tracking_allocator_proc :: proc(
allocator_data: rawptr,
mode: Allocator_Mode,
size, alignment: int,
old_memory: rawptr,
old_size: int,
loc := #caller_location,
) -> (result: []byte, err: Allocator_Error) {
@(no_sanitize_address)
track_alloc :: proc(data: ^Tracking_Allocator, entry: ^Tracking_Allocator_Entry) {
data.total_memory_allocated += i64(entry.size)
data.total_allocation_count += 1
data.current_memory_allocated += i64(entry.size)
if data.current_memory_allocated > data.peak_memory_allocated {
data.peak_memory_allocated = data.current_memory_allocated
}
}
@(no_sanitize_address)
track_free :: proc(data: ^Tracking_Allocator, entry: ^Tracking_Allocator_Entry) {
data.total_memory_freed += i64(entry.size)
data.total_free_count += 1
data.current_memory_allocated -= i64(entry.size)
}
data := (^Tracking_Allocator)(allocator_data)
sync.mutex_guard(&data.mutex)
if mode == .Query_Info {
info := (^Allocator_Query_Info)(old_memory)
if info != nil && info.pointer != nil {
if entry, ok := data.allocation_map[info.pointer]; ok {
info.size = entry.size
info.alignment = entry.alignment
}
info.pointer = nil
}
return
}
if mode == .Free && old_memory != nil && old_memory not_in data.allocation_map {
if data.bad_free_callback != nil {
data.bad_free_callback(data, old_memory, loc)
}
} else {
result = data.backing.procedure(data.backing.data, mode, size, alignment, old_memory, old_size, loc) or_return
}
result_ptr := raw_data(result)
if data.allocation_map.allocator.procedure == nil {
data.allocation_map.allocator = context.allocator
}
switch mode {
case .Alloc, .Alloc_Non_Zeroed:
data.allocation_map[result_ptr] = Tracking_Allocator_Entry{
memory = result_ptr,
size = size,
mode = mode,
alignment = alignment,
err = err,
location = loc,
}
track_alloc(data, &data.allocation_map[result_ptr])
case .Free:
if old_memory != nil && old_memory in data.allocation_map {
track_free(data, &data.allocation_map[old_memory])
}
delete_key(&data.allocation_map, old_memory)
case .Free_All:
if data.clear_on_free_all {
clear_map(&data.allocation_map)
data.current_memory_allocated = 0
}
case .Resize, .Resize_Non_Zeroed:
if old_memory != nil && old_memory in data.allocation_map {
track_free(data, &data.allocation_map[old_memory])
}
if old_memory != result_ptr {
delete_key(&data.allocation_map, old_memory)
}
data.allocation_map[result_ptr] = Tracking_Allocator_Entry{
memory = result_ptr,
size = size,
mode = mode,
alignment = alignment,
err = err,
location = loc,
}
track_alloc(data, &data.allocation_map[result_ptr])
case .Query_Features:
set := (^Allocator_Mode_Set)(old_memory)
if set != nil {
set^ = {.Alloc, .Alloc_Non_Zeroed, .Free, .Free_All, .Resize, .Query_Features, .Query_Info}
}
return nil, nil
case .Query_Info:
unreachable()
}
return
}
Memory: Operations
Mem Alloc
-
This function allocates `size` bytes of memory, aligned to a boundary specified by `alignment`, using the allocator specified by `allocator`.
-
If the `size` parameter is `0`, the operation is a no-op.
-
Inputs :
-
`size`: The desired size of the allocated memory region.
-
`alignment`: The desired alignment of the allocated memory region.
-
`allocator`: The allocator to allocate from.
-
-
core:mem
@(require_results)
alloc :: proc(
size: int,
alignment: int = DEFAULT_ALIGNMENT,
allocator := context.allocator,
loc := #caller_location,
) -> (rawptr, Allocator_Error) {
data, err := runtime.mem_alloc(size, alignment, allocator, loc)
return raw_data(data), err
}
@(require_results)
alloc_bytes :: proc(
size: int,
alignment: int = DEFAULT_ALIGNMENT,
allocator := context.allocator,
loc := #caller_location,
) -> ([]byte, Allocator_Error) {
return runtime.mem_alloc(size, alignment, allocator, loc)
}
@(require_results)
alloc_bytes_non_zeroed :: proc(
size: int,
alignment: int = DEFAULT_ALIGNMENT,
allocator := context.allocator,
loc := #caller_location,
) -> ([]byte, Allocator_Error) {
return runtime.mem_alloc_non_zeroed(size, alignment, allocator, loc)
}
-
base:runtime
mem_alloc :: #force_no_inline proc(size: int, alignment: int = DEFAULT_ALIGNMENT, allocator := context.allocator, loc := #caller_location) -> ([]byte, Allocator_Error) {
assert(is_power_of_two_int(alignment), "Alignment must be a power of two", loc)
if size == 0 || allocator.procedure == nil {
return nil, nil
}
return allocator.procedure(allocator.data, .Alloc, size, alignment, nil, 0, loc)
}
mem_alloc_bytes :: #force_no_inline proc(size: int, alignment: int = DEFAULT_ALIGNMENT, allocator := context.allocator, loc := #caller_location) -> ([]byte, Allocator_Error) {
assert(is_power_of_two_int(alignment), "Alignment must be a power of two", loc)
if size == 0 || allocator.procedure == nil{
return nil, nil
}
return allocator.procedure(allocator.data, .Alloc, size, alignment, nil, 0, loc)
}
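A minimal usage sketch of the core:mem allocation procs (the sizes and the pattern here are illustrative, not from the docs):

```odin
package example

import "core:mem"

main :: proc() {
	// Allocate 64 zero-initialized bytes from context.allocator.
	ptr, err := mem.alloc(64)
	if err != nil {
		return
	}
	defer mem.free(ptr)

	// Or get a []byte view instead of a rawptr:
	bytes, berr := mem.alloc_bytes(128)
	if berr == nil {
		defer mem.free_bytes(bytes)
	}
}
```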
New
-
Allocates a single object.
-
Returns a pointer to a newly allocated value of that type using the specified allocator.
-
base:builtin
new
@(builtin, require_results)
new :: proc($T: typeid, allocator := context.allocator, loc := #caller_location) -> (t: ^T, err: Allocator_Error) #optional_allocator_error {
t = (^T)(raw_data(mem_alloc_bytes(size_of(T), align_of(T), allocator, loc) or_return))
return
}
-
Example :
ptr := new(int)
ptr^ = 123
x: int = ptr^
new_aligned
@(require_results)
new_aligned :: proc($T: typeid, alignment: int, allocator := context.allocator, loc := #caller_location) -> (t: ^T, err: Allocator_Error) {
t = (^T)(raw_data(mem_alloc_bytes(size_of(T), alignment, allocator, loc) or_return))
return
}
new_clone
-
Allocates a clone of the value passed to it.
-
The result is a pointer to a newly allocated copy of the value passed.
@(builtin, require_results)
new_clone :: proc(data: $T, allocator := context.allocator, loc := #caller_location) -> (t: ^T, err: Allocator_Error) #optional_allocator_error {
t = (^T)(raw_data(mem_alloc_bytes(size_of(T), align_of(T), allocator, loc) or_return))
if t != nil {
t^ = data
}
return
}
-
Example :
ptr: ^int = new_clone(123)
assert(ptr^ == 123)
Mem Free
-
Free a single object (opposite of new). -
Will try to free the passed pointer with the given allocator, if the allocator supports this operation. -
Only free memory with the allocator it was allocated with.
Cautions
-
Trying to free a pointer that is "zero-initialized" (i.e. nil) is a no-op, so it will not cause a "bad free".
-
free(&...) will almost always be wrong, or at best unnecessary. -
If you need to use & to get a pointer to something, then that something probably isn't allocated at all. -
If it were allocated, you'd already have a pointer. E.g., in that example, free(&d) would be trying to free a pointer to the stack--that'll never end well. -
For built-in types like slices, dynamic arrays, maps, and strings, use delete instead. They're not pointers themselves, but they hold a pointer internally.
-
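A short sketch of the cautions above (illustrative only):

```odin
x := new(int)        // heap allocation: free(x) is correct
free(x)

d: int               // stack variable
// free(&d)          // wrong: &d points into the stack, not an allocation

s := make([]int, 8)  // built-in container: use delete, not free
delete(s)
```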
Procs
-
base:builtin
@builtin
free :: proc{mem_free}
-
base:runtime
mem_free :: #force_no_inline proc(ptr: rawptr, allocator := context.allocator, loc := #caller_location) -> Allocator_Error {
if ptr == nil || allocator.procedure == nil {
return nil
}
_, err := allocator.procedure(allocator.data, .Free, 0, 0, ptr, 0, loc)
return err
}
mem_free_with_size :: #force_no_inline proc(ptr: rawptr, byte_count: int, allocator := context.allocator, loc := #caller_location) -> Allocator_Error {
if ptr == nil || allocator.procedure == nil {
return nil
}
_, err := allocator.procedure(allocator.data, .Free, 0, 0, ptr, byte_count, loc)
return err
}
mem_free_bytes :: #force_no_inline proc(bytes: []byte, allocator := context.allocator, loc := #caller_location) -> Allocator_Error {
if bytes == nil || allocator.procedure == nil {
return nil
}
_, err := allocator.procedure(allocator.data, .Free, 0, 0, raw_data(bytes), len(bytes), loc)
return err
}
-
core:mem
free :: proc(
ptr: rawptr,
allocator := context.allocator,
loc := #caller_location,
) -> Allocator_Error {
return runtime.mem_free(ptr, allocator, loc)
}
Mem Free All
-
Will try to free/reset all of the memory of the given allocator, if the allocator supports this operation. -
base:builtin
// `free_all` will try to free/reset all of the memory of the given `allocator` if the allocator supports this operation.
@builtin
free_all :: proc{mem_free_all}
-
base:runtime
mem_free_all :: #force_no_inline proc(allocator := context.allocator, loc := #caller_location) -> (err: Allocator_Error) {
if allocator.procedure != nil {
_, err = allocator.procedure(allocator.data, .Free_All, 0, 0, nil, 0, loc)
}
return
}
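A common free_all pattern is with an arena, where the whole region is reset in one call (a sketch, assuming core:mem's Arena):

```odin
package example

import "core:mem"

main :: proc() {
	buf: [1024]byte
	arena: mem.Arena
	mem.arena_init(&arena, buf[:])
	context.allocator = mem.arena_allocator(&arena)

	a := new(int)  // allocated inside the arena
	b := new(int)
	_ = a; _ = b

	free_all()     // resets the whole arena; a and b are now invalid
}
```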
Make
-
Allocates and initializes a value of type slice, dynamic array, map, or multi-pointer (only).
-
Unlike new, make's return value is the same as the type of its argument, not a pointer to it. -
Like new, the first argument is a type, not a value. -
Uses the specified allocator; the default is context.allocator. -
base:builtin
@builtin
make :: proc{
make_slice,
make_dynamic_array,
make_dynamic_array_len,
make_dynamic_array_len_cap,
make_map,
make_map_cap,
make_multi_pointer,
make_soa_slice,
make_soa_dynamic_array,
make_soa_dynamic_array_len,
make_soa_dynamic_array_len_cap,
}
-
make_aligned -
Not included in make.
-
-
make_soa_aligned -
Not included in make.
-
Examples
slice := make([]int, 65)
dynamic_array_zero_length := make([dynamic]int)
dynamic_array_with_length := make([dynamic]int, 32)
dynamic_array_with_length_and_capacity := make([dynamic]int, 16, 64)
made_map := make(map[string]int)
made_map_with_reservation := make(map[string]int, 64)
Allocation in structs
-
Caio:
-
How can I ensure that the [dynamic] arrays in the struct physics_packet below use a custom allocator?
Physics_Packet :: struct {
	tick_number: u64,
	data: Physics_Data,
}
Physics_Data :: struct {
	characters: [dynamic]Character_Data,
	creatures: [dynamic]Creature_Data,
}
physics_packet: Physics_Packet
-
When doing something like append(&physics_packet.data.characters, { id = personagem.socket, pos = personagem.pos }) -
How do I not use the context.allocator but a custom allocator? I created the struct just by doing physics_packet: Physics_Packet, so which allocator is used to define the arrays characters and creatures?
-
-
Barinzaya:
-
Use
physics_packet.data.characters = make([dynamic]Character_Data, allocator=custom_alloc)
-
-
Chamberlain:
-
The first append actually does the allocation unless you do it explicitly with make.
-
-
Caio:
-
What about ZII? Shouldn't I delete the "previous array" before assigning it with a make? Or better, is there a way to define the whole struct using a custom allocator, without having to redefine the arrays just to use a different allocator? There's also the question of which allocator is used when creating the struct.
-
-
Chamberlain:
-
No allocator is used when creating the struct.
-
-
Barinzaya:
-
If you declared physics_packet as a local variable, then it'll be zero-initialized on the stack. No allocation will happen (depending on how technical you are about "allocation"; technically it was allocated on the stack--but that's completely unrelated to Allocators). That includes the dynamic arrays in it, where "zero" means "no pointer, 0 length, no allocator".
-
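Putting the answers above together, a sketch that ties the nested dynamic arrays to a custom allocator (custom_alloc is a placeholder for any mem.Allocator you own):

```odin
physics_packet: Physics_Packet  // zero-initialized: no allocator assigned yet

// Explicitly make the arrays with the custom allocator before appending.
physics_packet.data.characters = make([dynamic]Character_Data, allocator = custom_alloc)
physics_packet.data.creatures  = make([dynamic]Creature_Data,  allocator = custom_alloc)

// Subsequent appends now use custom_alloc, not context.allocator.
append(&physics_packet.data.characters, Character_Data{})
```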
slice
-
Allocates and initializes a slice
@(require_results)
make_aligned :: proc($T: typeid/[]$E, #any_int len: int, alignment: int, allocator := context.allocator, loc := #caller_location) -> (res: T, err: Allocator_Error) #optional_allocator_error {
err = _make_aligned_type_erased(&res, size_of(E), len, alignment, allocator, loc)
return
}
@(require_results)
_make_aligned_type_erased :: proc(slice: rawptr, elem_size: int, len: int, alignment: int, allocator: Allocator, loc := #caller_location) -> Allocator_Error {
make_slice_error_loc(loc, len)
data, err := mem_alloc_bytes(elem_size*len, alignment, allocator, loc)
if data == nil && elem_size != 0 {
return err
}
(^Raw_Slice)(slice).data = raw_data(data)
(^Raw_Slice)(slice).len = len
return err
}
@(builtin, require_results)
make_slice :: proc($T: typeid/[]$E, #any_int len: int, allocator := context.allocator, loc := #caller_location) -> (res: T, err: Allocator_Error) #optional_allocator_error {
err = _make_aligned_type_erased(&res, size_of(E), len, align_of(E), allocator, loc)
return
}
dynamic array
-
Allocates and initializes a dynamic array.
@(builtin, require_results)
make_dynamic_array :: proc($T: typeid/[dynamic]$E, allocator := context.allocator, loc := #caller_location) -> (array: T, err: Allocator_Error) #optional_allocator_error {
err = _make_dynamic_array_len_cap((^Raw_Dynamic_Array)(&array), size_of(E), align_of(E), 0, 0, allocator, loc)
return
}
@(builtin, require_results)
make_dynamic_array_len :: proc($T: typeid/[dynamic]$E, #any_int len: int, allocator := context.allocator, loc := #caller_location) -> (array: T, err: Allocator_Error) #optional_allocator_error {
err = _make_dynamic_array_len_cap((^Raw_Dynamic_Array)(&array), size_of(E), align_of(E), len, len, allocator, loc)
return
}
@(builtin, require_results)
make_dynamic_array_len_cap :: proc($T: typeid/[dynamic]$E, #any_int len: int, #any_int cap: int, allocator := context.allocator, loc := #caller_location) -> (array: T, err: Allocator_Error) #optional_allocator_error {
err = _make_dynamic_array_len_cap((^Raw_Dynamic_Array)(&array), size_of(E), align_of(E), len, cap, allocator, loc)
return
}
@(require_results)
_make_dynamic_array_len_cap :: proc(array: ^Raw_Dynamic_Array, size_of_elem, align_of_elem: int, #any_int len: int, #any_int cap: int, allocator := context.allocator, loc := #caller_location) -> (err: Allocator_Error) {
make_dynamic_array_error_loc(loc, len, cap)
array.allocator = allocator // initialize allocator before just in case it fails to allocate any memory
data := mem_alloc_bytes(size_of_elem*cap, align_of_elem, allocator, loc) or_return
use_zero := data == nil && size_of_elem != 0
array.data = raw_data(data)
array.len = 0 if use_zero else len
array.cap = 0 if use_zero else cap
array.allocator = allocator
return
}
map
-
Initializes a map with an allocator.
@(builtin, require_results)
make_map :: proc($T: typeid/map[$K]$E, allocator := context.allocator, loc := #caller_location) -> (m: T) {
m.allocator = allocator
return m
}
@(builtin, require_results)
make_map_cap :: proc($T: typeid/map[$K]$E, #any_int capacity: int, allocator := context.allocator, loc := #caller_location) -> (m: T, err: Allocator_Error) #optional_allocator_error {
make_map_expr_error_loc(loc, capacity)
context.allocator = allocator
err = reserve_map(&m, capacity, loc)
return
}
Multi-pointer
-
Allocates and initializes a multi-pointer.
@(builtin, require_results)
make_multi_pointer :: proc($T: typeid/[^]$E, #any_int len: int, allocator := context.allocator, loc := #caller_location) -> (mp: T, err: Allocator_Error) #optional_allocator_error {
make_slice_error_loc(loc, len)
data := mem_alloc_bytes(size_of(E)*len, align_of(E), allocator, loc) or_return
if data == nil && size_of(E) != 0 {
return
}
mp = cast(T)raw_data(data)
return
}
Deletes
-
Free a group of objects (opposite of make). -
Deletes the backing memory of a value allocated with make or a string that was allocated through an allocator.
-
Will try to free the underlying data of the passed built-in data structure (string, cstring, dynamic array, slice, or map), with the given allocator, if the allocator supports this operation. -
base:builtin
@builtin
delete :: proc{
delete_string,
delete_cstring,
delete_dynamic_array,
delete_slice,
delete_map,
delete_soa_slice,
delete_soa_dynamic_array,
delete_string16,
delete_cstring16,
}
-
Recursiveness :
-
delete isn't recursive. It has no way of knowing whether you actually want to delete the contents or not--you may not always.
array_args_as_bytes: [dynamic][]u8

// Option 1: Don't delete the contents, just the outer array.
defer delete(array_args_as_bytes)

// Option 2: Delete everything.
defer {
	for arg in array_args_as_bytes {
		delete(arg)
	}
	delete(array_args_as_bytes)
}
-
If it's a struct, it's not uncommon to make a destroy_struct proc that does this for you. -
Example: json.destroy_value.
-
-
The way I understand it is that the data of the object is deleted. The data itself is a pointer to where the contents are stored, so deleting the data frees the memory behind that pointer.
-
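A sketch of the hand-written destroy proc pattern mentioned above (Entity and its fields are hypothetical):

```odin
Entity :: struct {
	name:  string,        // allocated string
	items: [dynamic]int,  // dynamic array
}

// Delete the allocated contents; delete isn't recursive, so do it by hand.
entity_destroy :: proc(e: ^Entity) {
	delete(e.name)
	delete(e.items)
}
```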
string
@builtin
delete_string :: proc(str: string, allocator := context.allocator, loc := #caller_location) -> Allocator_Error {
return mem_free_with_size(raw_data(str), len(str), allocator, loc)
}
cstring
@builtin
delete_cstring :: proc(str: cstring, allocator := context.allocator, loc := #caller_location) -> Allocator_Error {
return mem_free((^byte)(str), allocator, loc)
}
string16
@builtin
delete_string16 :: proc(str: string16, allocator := context.allocator, loc := #caller_location) -> Allocator_Error {
return mem_free_with_size(raw_data(str), len(str)*size_of(u16), allocator, loc)
}
cstring16
@builtin
delete_cstring16 :: proc(str: cstring16, allocator := context.allocator, loc := #caller_location) -> Allocator_Error {
return mem_free((^u16)(str), allocator, loc)
}
dynamic array
@builtin
delete_dynamic_array :: proc(array: $T/[dynamic]$E, loc := #caller_location) -> Allocator_Error {
return mem_free_with_size(raw_data(array), cap(array)*size_of(E), array.allocator, loc)
}
slice
@builtin
delete_slice :: proc(array: $T/[]$E, allocator := context.allocator, loc := #caller_location) -> Allocator_Error {
return mem_free_with_size(raw_data(array), len(array)*size_of(E), allocator, loc)
}
Map
@builtin
delete_map :: proc(m: $T/map[$K]$V, loc := #caller_location) -> Allocator_Error {
return map_free_dynamic(transmute(Raw_Map)m, map_info(T), loc)
}
Mem Resize
-
base:runtime
_mem_resize :: #force_no_inline proc(
ptr: rawptr,
old_size,
new_size: int,
alignment: int = DEFAULT_ALIGNMENT,
allocator := context.allocator,
should_zero: bool,
loc := #caller_location
) -> (data: []byte, err: Allocator_Error) {
assert(is_power_of_two_int(alignment), "Alignment must be a power of two", loc)
if allocator.procedure == nil {
return nil, nil
}
if new_size == 0 {
if ptr != nil {
_, err = allocator.procedure(allocator.data, .Free, 0, 0, ptr, old_size, loc)
return
}
return
} else if ptr == nil {
if should_zero {
return allocator.procedure(allocator.data, .Alloc, new_size, alignment, nil, 0, loc)
} else {
return allocator.procedure(allocator.data, .Alloc_Non_Zeroed, new_size, alignment, nil, 0, loc)
}
} else if old_size == new_size && uintptr(ptr) % uintptr(alignment) == 0 {
data = ([^]byte)(ptr)[:old_size]
return
}
if should_zero {
data, err = allocator.procedure(allocator.data, .Resize, new_size, alignment, ptr, old_size, loc)
} else {
data, err = allocator.procedure(allocator.data, .Resize_Non_Zeroed, new_size, alignment, ptr, old_size, loc)
}
if err == .Mode_Not_Implemented {
if should_zero {
data, err = allocator.procedure(allocator.data, .Alloc, new_size, alignment, nil, 0, loc)
} else {
data, err = allocator.procedure(allocator.data, .Alloc_Non_Zeroed, new_size, alignment, nil, 0, loc)
}
if err != nil {
return
}
copy(data, ([^]byte)(ptr)[:old_size])
_, err = allocator.procedure(allocator.data, .Free, 0, 0, ptr, old_size, loc)
}
return
}
mem_resize :: proc(
ptr: rawptr,
old_size,
new_size: int,
alignment: int = DEFAULT_ALIGNMENT,
allocator := context.allocator,
loc := #caller_location
) -> (data: []byte, err: Allocator_Error) {
assert(is_power_of_two_int(alignment), "Alignment must be a power of two", loc)
return _mem_resize(ptr, old_size, new_size, alignment, allocator, true, loc)
}
non_zero_mem_resize :: proc(
ptr: rawptr,
old_size,
new_size: int,
alignment: int = DEFAULT_ALIGNMENT,
allocator := context.allocator,
loc := #caller_location
) -> (data: []byte, err: Allocator_Error) {
assert(is_power_of_two_int(alignment), "Alignment must be a power of two", loc)
return _mem_resize(ptr, old_size, new_size, alignment, allocator, false, loc)
}
Mem Set
-
Set a number of bytes (len) to a value (val), starting from the address specified (ptr).
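A small usage sketch via core:mem's wrapper (hedged: check the exact signature of mem.set in your Odin version):

```odin
import "core:mem"

buf: [16]byte
mem.set(&buf, 0xAA, len(buf))  // fill all 16 bytes with the value 0xAA
```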
Using the 'C Runtime Library' (CRT)
-
base:runtime
when ODIN_NO_CRT == true && ODIN_OS == .Windows {
@(link_name="memset", linkage="strong", require)
memset :: proc "c" (ptr: rawptr, val: i32, len: int) -> rawptr {
RtlFillMemory(ptr, len, val)
return ptr
}
} else when ODIN_NO_CRT || (ODIN_OS != .Orca && (ODIN_ARCH == .wasm32 || ODIN_ARCH == .wasm64p32)) {
@(link_name="memset", linkage="strong", require)
memset :: proc "c" (ptr: rawptr, val: i32, #any_int len: int_t) -> rawptr {
if ptr != nil && len != 0 {
b := byte(val)
p := ([^]byte)(ptr)
for i := int_t(0); i < len; i += 1 {
p[i] = b
}
}
return ptr
}
} else {
memset :: proc "c" (ptr: rawptr, val: i32, len: int) -> rawptr {
if ptr != nil && len != 0 {
b := byte(val)
p := ([^]byte)(ptr)
for i := 0; i < len; i += 1 {
p[i] = b
}
}
return ptr
}
}
In C
Mem Copy
Which one to use
-
TLDR :
-
Barinzaya / Tetralux / Yawning:
-
Use copy. -
The difference in performance is going to be pretty small between any of them. Anything non_overlapping is a slight optimization at most, if you know for sure the ranges won't overlap. If they do, it may completely wreck your data. -
Use intrinsics.mem_copy_non_overlapping or another option if you profile and find copy to be an issue.
-
-
-
copy -
For convenience and safety, but slower.
-
-
intrinsics.mem_copy_non_overlapping -
For speed and no safety.
-
-
runtime.copy / runtime.copy_non_overlapping -
A middle ground between the two above.
-
-
mem.copy / mem.copy_non_overlapping -
Just an indirection to intrinsics.mem_copy / intrinsics.mem_copy_non_overlapping. -
mem.copy is a tiny wrapper that will almost certainly end up inlined with any optimization on.
-
-
core:c/libc -
Ignore this one; it's just there for completeness.
-
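A quick sketch of the recommended copy in action:

```odin
src := []int{1, 2, 3, 4}
dst := make([]int, 2)
defer delete(dst)

n := copy(dst, src)  // copies min(len(dst), len(src)) elements; here n == 2
```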
Equivalence to C's
-
mem_copy -
Similar to C's memmove. -
Requires a little bit of additional logic to correctly handle overlapping ranges.
-
-
mem_copy_non_overlapping -
Similar to C's memcpy.
-
Using
intrinsics
-
Barinzaya:
-
The intrinsic is handled by the compiler. It does a bit of additional "smart" stuff--if the length is constant, it emits the instructions to do the copy inline (without a call), and otherwise it just tells LLVM to do the memcpy/memmove. LLVM may in fact just call the memcpy/memmove proc (provided by the CRT or procs.odin), if it sees fit.
But it still allows LLVM to be a little "smarter" about it, AFAIK. Since it knows what the proc does, it can potentially elide the copy (though probably less so in the case where the length is variable).
-
Every available copy procedure uses intrinsics.mem_copy or intrinsics.mem_copy_non_overlapping under the hood, so all those implementations benefit from possible compiler optimizations.
-
-
base:runtime-
Builtin.
-
Slice / Strings.
-
Copies elements from a source slice/string src to a destination slice dst. -
The source and destination may overlap. Copy returns the number of elements copied, which will be the minimum of len(src) and len(dst).
@(require_results)
copy_slice_raw :: proc "contextless" (dst, src: rawptr, dst_len, src_len, elem_size: int) -> int {
	n := min(dst_len, src_len)
	if n > 0 {
		intrinsics.mem_copy(dst, src, n*elem_size)
	}
	return n
}
@builtin
copy_slice :: #force_inline proc "contextless" (dst, src: $T/[]$E) -> int {
	return copy_slice_raw(raw_data(dst), raw_data(src), len(dst), len(src), size_of(E))
}
@builtin
copy_from_string :: #force_inline proc "contextless" (dst: $T/[]$E/u8, src: $S/string) -> int {
	return copy_slice_raw(raw_data(dst), raw_data(src), len(dst), len(src), 1)
}
@builtin
copy :: proc{copy_slice, copy_from_string, copy_from_string16}
-
-
base:runtime -
General.
mem_copy :: proc "contextless" (dst, src: rawptr, len: int) -> rawptr {
	if src != nil && dst != src && len > 0 {
		// NOTE(bill): This _must_ be implemented like C's memmove
		intrinsics.mem_copy(dst, src, len)
	}
	return dst
}
mem_copy_non_overlapping :: proc "contextless" (dst, src: rawptr, len: int) -> rawptr {
	if src != nil && dst != src && len > 0 {
		// NOTE(bill): This _must_ be implemented like C's memcpy
		intrinsics.mem_copy_non_overlapping(dst, src, len)
	}
	return dst
}
-
-
core:mem
copy :: proc "contextless" (dst, src: rawptr, len: int) -> rawptr {
	intrinsics.mem_copy(dst, src, len)
	return dst
}
copy_non_overlapping :: proc "contextless" (dst, src: rawptr, len: int) -> rawptr {
	intrinsics.mem_copy_non_overlapping(dst, src, len)
	return dst
}
-
base:intrinsics
mem_copy :: proc(dst, src: rawptr, len: int) ---
mem_copy_non_overlapping :: proc(dst, src: rawptr, len: int) ---
Using
core:c/libc
-
Barinzaya:
-
It's just procs from libc--part of which is the CRT. So the libc one is explicitly the CRT implementation.
-
-
core:c/libc
memcpy :: proc(s1, s2: rawptr, n: size_t) -> rawptr ---
memmove :: proc(s1, s2: rawptr, n: size_t) -> rawptr ---
strcpy :: proc(s1: [^]char, s2: cstring) -> [^]char ---
strncpy :: proc(s1: [^]char, s2: cstring, n: size_t) -> [^]char ---
Implementation from 'C Runtime Library' (CRT)
-
ODIN_NO_CRT -
true if the -no-crt command line switch is passed, which inhibits linking with the C Runtime Library, a.k.a. libc. -
The default is false, so the CRT is used.
-
-
Should I enable the CRT or not? I forgot to ask that, oops.
-
Barinzaya:
-
memcpy and memmove are part of the C run-time, and LLVM needs to have them. If you disable the CRT, then they need to be provided--hence why they're in procs.odin. Note that they're in when ODIN_NO_CRT blocks (plus other conditions). So the procs.odin implementation is used when the CRT isn't linked, because they need to exist.
-
-
Caio:
-
Can I say that procs.odin provides an implementation for the intrinsics copy procedures, considering the conditions defined in procs.odin? As a fallback, I mean; I assume the intrinsics already have an implementation somewhere.
-
-
Barinzaya:
-
Not entirely; procs.odin is more "stuff needed for LLVM to work at all when the CRT isn't included". -
The intrinsics are all implemented in the compiler itself. In the case of the copys, they defer to LLVM intrinsics, which may call memcpy/memmove from the CRT or procs.odin--but they also may not. -
Also, LLVM can call memcpy/memmove without those intrinsics too, for sufficiently large copies.
-
-
Tetralux:
-
Intrinsics are more "compiler hooks" for "I want to do this thing please"
-
They are somewhat opaque things if you see what I mean
-
-
base:runtime
when ODIN_NO_CRT == true && ODIN_OS == .Windows {
@(link_name="memcpy", linkage="strong", require)
memcpy :: proc "c" (dst, src: rawptr, len: int) -> rawptr {
RtlMoveMemory(dst, src, len)
return dst
}
@(link_name="memmove", linkage="strong", require)
memmove :: proc "c" (dst, src: rawptr, len: int) -> rawptr {
RtlMoveMemory(dst, src, len)
return dst
}
} else when ODIN_NO_CRT || (ODIN_OS != .Orca && (ODIN_ARCH == .wasm32 || ODIN_ARCH == .wasm64p32)) {
@(link_name="memcpy", linkage="strong", require)
memcpy :: proc "c" (dst, src: rawptr, #any_int len: int_t) -> rawptr {
d, s := ([^]byte)(dst), ([^]byte)(src)
if d != s {
for i := int_t(0); i < len; i += 1 {
d[i] = s[i]
}
}
return d
}
@(link_name="memmove", linkage="strong", require)
memmove :: proc "c" (dst, src: rawptr, #any_int len: int_t) -> rawptr {
d, s := ([^]byte)(dst), ([^]byte)(src)
if d == s || len == 0 {
return dst
}
if d > s && uintptr(d)-uintptr(s) < uintptr(len) {
for i := len-1; i >= 0; i -= 1 {
d[i] = s[i]
}
return dst
}
if s > d && uintptr(s)-uintptr(d) < uintptr(len) {
for i := int_t(0); i < len; i += 1 {
d[i] = s[i]
}
return dst
}
return memcpy(dst, src, len)
}
} else {
// None.
}
In C
Mem Zero
Using
intrinsics
-
base:runtime
mem_zero :: proc "contextless" (data: rawptr, len: int) -> rawptr {
if data == nil {
return nil
}
if len <= 0 {
return data
}
intrinsics.mem_zero(data, len)
return data
}
-
base:intrinsics
mem_zero :: proc(ptr: rawptr, len: int) ---
mem_zero_volatile :: proc(ptr: rawptr, len: int) ---
Conditionally Mem Zero
-
When acquiring memory from the OS for the first time, it's likely that the OS satisfies the request by mapping the zero page multiple times. The allocation does not have physical pages backing it until those pages are written to, which causes a page fault. This is often called COW (Copy on Write).
-
You do not want to actually zero out memory in this case, because doing so would cause a bunch of page faults, decreasing the speed of allocations and increasing the amount of resident physical memory actually used.
-
Instead, a better technique is to check whether memory is already zeroed before zeroing it. This turns out to be an important optimization in practice, saving nearly half (or more) of the physical memory used by an application.
-
This is why every implementation of calloc in libc does this optimization. -
It may seem counter-intuitive, but most allocations in an application are wasted and never used. Consider something like a [dynamic]T, which always doubles in capacity on resize: you rarely ever use the full capacity of a dynamic array, so you would have a lot of resident waste if you actually zeroed the remainder of the memory. -
Keep in mind the OS is already guaranteed to give you zeroed memory by mapping in this zero page multiple times, so in the best case there is no need to actually zero anything. As for testing all this memory for a zero value, it costs next to nothing, because the same zero page is used for the whole allocation and will stay in L1 cache for the entire zero-checking process.
-
base:runtime
conditional_mem_zero :: proc "contextless" (data: rawptr, n_: int) #no_bounds_check {
if n_ <= 0 {
return
}
n := uint(n_)
n_words := n / size_of(uintptr)
p_words := ([^]uintptr)(data)[:n_words]
p_bytes := ([^]byte)(data)[size_of(uintptr) * n_words:n]
for &p_word in p_words {
if p_word != 0 {
p_word = 0
}
}
for &p_byte in p_bytes {
if p_byte != 0 {
p_byte = 0
}
}
}
Using the 'C Runtime Library' (CRT)
when ODIN_NO_CRT && ODIN_OS == .Windows {
// None
} else when ODIN_NO_CRT || (ODIN_OS != .Orca && (ODIN_ARCH == .wasm32 || ODIN_ARCH == .wasm64p32)) {
@(link_name="bzero", linkage="strong", require)
bzero :: proc "c" (ptr: rawptr, #any_int len: int_t) -> rawptr {
if ptr != nil && len != 0 {
p := ([^]byte)(ptr)
for i := int_t(0); i < len; i += 1 {
p[i] = 0
}
}
return ptr
}
} else {
// None
}
In C
Resize
Default resize procedure
-
When an allocator does not support the resize operation but supports .Alloc/.Alloc_Non_Zeroed and .Free, this procedure is used to implement the allocator's default resize behavior. -
The behavior of the function is as follows:
-
If new_size is 0, the function acts like free(), freeing the memory region specified by old_data. -
If old_data is nil, the function acts like alloc(), allocating new_size bytes of memory aligned on a boundary specified by alignment. -
Otherwise, a new memory region of size new_size is allocated, then the data from the old memory region is copied over and the old memory region is freed.
-
@(require_results)
_default_resize_bytes_align :: #force_inline proc(
old_data: []byte,
new_size: int,
alignment: int,
should_zero: bool,
allocator := context.allocator,
loc := #caller_location,
) -> ([]byte, Allocator_Error) {
old_memory := raw_data(old_data)
old_size := len(old_data)
if old_memory == nil {
if should_zero {
return alloc_bytes(new_size, alignment, allocator, loc)
} else {
return alloc_bytes_non_zeroed(new_size, alignment, allocator, loc)
}
}
if new_size == 0 {
err := free_bytes(old_data, allocator, loc)
return nil, err
}
if new_size == old_size && is_aligned(old_memory, alignment) {
return old_data, .None
}
new_memory : []byte
err : Allocator_Error
if should_zero {
new_memory, err = alloc_bytes(new_size, alignment, allocator, loc)
} else {
new_memory, err = alloc_bytes_non_zeroed(new_size, alignment, allocator, loc)
}
if new_memory == nil || err != nil {
return nil, err
}
runtime.copy(new_memory, old_data)
free_bytes(old_data, allocator, loc)
return new_memory, err
}
Entry Point
-
Check runtime/entry_unix.odin / runtime/entry_windows.odin / etc. -
For Unix with no CRT, runtime/entry_unix_no_crt_X.asm runs before even calling _start_odin(). -
Unix example:
@(link_name="main", linkage="strong", require)
main :: proc "c" (argc: i32, argv: [^]cstring) -> i32 {
args__ = argv[:argc]
context = default_context()
#force_no_inline _startup_runtime()
intrinsics.__entry_point()
#force_no_inline _cleanup_runtime()
return 0
}
-
The arguments are passed by the C runtime library (libc), then stored in a global variable for use by the other modules.
// IMPORTANT NOTE(bill): Do not call this unless you want to explicitly set up the entry point and how it gets called
// This is probably only useful for freestanding targets
foreign {
@(link_name="__$startup_runtime")
_startup_runtime :: proc "odin" () ---
@(link_name="__$cleanup_runtime")
_cleanup_runtime :: proc "odin" () ---
}
-
_startup_runtime -
Initializes some global variables and calls @(init) functions in the code.
-
-
_cleanup_runtime -
Calls @(fini) functions. -
_cleanup_runtime_contextless -
Contextless variant, only called by os.exit() from the core:os library (os1).
_cleanup_runtime_contextless :: proc "contextless" () {
	context = default_context()
	_cleanup_runtime()
}
-
-
-
@(init) -
This attribute may be applied to any procedure that neither takes any parameters nor returns any values. All suitable procedures marked with @(init) will then be called at the start of the program, before main is called. -
The exact order in which all such initialization functions are called is deterministic and hence reliable: a topological sort of the import graph, then alphabetical file order within the package, then top-down within the file.
-
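A minimal sketch of the attribute (init_tables is a hypothetical name):

```odin
@(init)
init_tables :: proc() {
	// Runs once before `main`, after the runtime has started up.
	// Takes no parameters and returns nothing, as required.
}
```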
-
@(fini) -
Like @(init), but run after the main procedure finishes.
-
-
@(entry_point_only) -
Marks a procedure that can be called within the entry point only.
-
-
ODIN_NO_ENTRY_POINT -
true if the -no-entry-point command line switch is passed, which makes the declaration of a main procedure optional.
-
-
Writing an OS Kernel in Odin .
-
Cool.
-
Multi-Threading
-
Note: I'm still studying Odin's implementation of multithreading, so the notes here are basically me organizing the content I found around the source code and core:sync.
-
Ginger Bill:
-
"Odin does have numerous threading and synchronization primitives in its core library. But it does not have any parallelism/concurrency features built directly into the language itself because all of them require some form of automatic memory management which is a no-go."
-
-
"Odin handles threads similarly to how Go handles it".
core:thread
thread.create
-
create. -
Tutorial .
-
Create a thread in a suspended state with the given priority.
-
This procedure creates a thread that will run the procedure specified by the
procedure parameter with the given priority. The returned thread is in a suspended state until the start() procedure is called.
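For instance (a minimal sketch using core:thread's create/start/join/destroy):

```odin
package example

import "core:fmt"
import "core:thread"

main :: proc() {
	// create returns a suspended thread; it only runs after start.
	t := thread.create(proc(t: ^thread.Thread) {
		fmt.println("worker running")
	})
	thread.start(t)
	thread.join(t)
	thread.destroy(t)
}
```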
Thread Pool
-
Via thread.pool
-
Via dynamic array :
-
Stores pointer to a thread.
-
Tutorial .
arr := []int{1, 2, 3}

main :: proc() {
	threadPool := make([dynamic]^thread.Thread, 0, len(arr))
	defer delete(threadPool)
}
-
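The snippet above only allocates the array; a sketch of actually filling it, assuming `create_and_start_with_poly_data` (the core:thread helper for passing one argument):

```odin
package example

import "core:fmt"
import "core:thread"

arr := []int{1, 2, 3}

main :: proc() {
	threadPool := make([dynamic]^thread.Thread, 0, len(arr))
	defer delete(threadPool)

	for v in arr {
		t := thread.create_and_start_with_poly_data(v, proc(v: int) {
			fmt.println("got", v)
		})
		append(&threadPool, t)
	}
	for t in threadPool {
		thread.join(t)
		thread.destroy(t)
	}
}
```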
Channels
core:sync/chan
-
-
The tutorial is useful.
-
-
Tutorial .
-
This package provides both high-level and low-level channel types for thread-safe communication.
-
While channels are essentially thread-safe queues under the hood, their primary purpose is to facilitate safe communication between multiple readers and multiple writers. Although they can be used like queues, channels are designed with synchronization and concurrent messaging patterns in mind.
-
Provided types :
-
Chan: a high-level channel. -
Raw_Chan: a low-level channel. -
Raw_Queue: a low-level, non-thread-safe queue implementation used internally.
-
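A sketch of the high-level `Chan` with one producer and one consumer; the exact names (`chan.create`, `chan.send`, `chan.recv`, `chan.close`, `chan.destroy`) are my reading of core:sync/chan and may differ slightly:

```odin
package example

import "core:fmt"
import "core:sync/chan"
import "core:thread"

main :: proc() {
	// Buffered channel of ints with capacity 4.
	c, err := chan.create(chan.Chan(int), 4, context.allocator)
	assert(err == .None)
	defer chan.destroy(c)

	producer := thread.create_and_start_with_poly_data(c, proc(c: chan.Chan(int)) {
		for i in 1..=3 {
			chan.send(c, i)
		}
		chan.close(c)
	})
	defer thread.destroy(producer)

	for {
		v, ok := chan.recv(c) // blocks; ok is false once closed and drained
		if !ok { break }
		fmt.println(v)
	}
}
```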
CPU Yield
-
cpu_relax -
This procedure may lower CPU consumption or yield to a hyperthreaded twin processor.
-
Its exact function is architecture-specific, but the intent is to signal that you're not doing much on a CPU.
-
Synchronization Primitives: Direct Comparisons
Comparing
Sema
vs
Atomic_Sema
-
Sema is just a wrapper around an OS-dependent _Sema implementation, but, as there's only one implementation of _Sema in the whole sync library, Sema and Atomic_Sema end up being the same. -
It's just an edge case kept for consistency.
-
Blob:
-
Once upon a time there was a Wait-Group-based semaphore, which could be switched to with a flag. Yeah, I'd imagine it's just kept as-is for consistency. #d5886c1
-
Comparing
Mutex
vs
Atomic_Mutex
-
For any other OS :
-
It doesn't matter.
Mutex uses Atomic_Mutex directly. It acts as a direct wrapper.
-
-
For Windows :
-
.
-
Comparing
RW_Mutex
vs
Atomic_RW_Mutex
-
For any other OS :
-
It doesn't matter.
RW_Mutex uses Atomic_RW_Mutex directly. It acts as a direct wrapper.
-
-
For Windows :
-
.
-
Comparing
Cond
vs
Atomic_Cond
-
For any other OS :
-
It doesn't matter.
Cond uses Atomic_Cond directly. It acts as a direct wrapper.
-
-
For Windows :
-
Which one to use for Windows?
-
By default lots of implementations of other synchronization primitives use
Cond, so I guess I should stay with that one for consistency? I don't know. The implementation from ntdll seems more troublesome than win32, based on what I saw.
-
-
Using
Cond:-
The
win32.SleepConditionVariableSRW will be used.
SleepConditionVariableSRW :: proc(ConditionVariable: ^CONDITION_VARIABLE, SRWLock: ^SRWLOCK, dwMilliseconds: DWORD, Flags: LONG) -> BOOL
-
A Win32 API function that blocks a thread until a condition variable is signaled, using an SRW lock as the associated synchronization object.
-
It provides a lightweight, efficient way for threads to wait for a condition to change without spinning. It is the higher-level Win32 analogue to Linux futex-style waits and internally uses the wait-on-address mechanism.
-
ConditionVariable -
The condition variable to wait on.
-
-
SRWLock -
A previously acquired SRW lock (in shared or exclusive mode).
-
-
dwMilliseconds -
Timeout in milliseconds, or
INFINITE.
-
-
Flags -
CONDITION_VARIABLE_LOCKMODE_SHARED if the lock was acquired in shared mode; -
0 if it was acquired in exclusive mode.
-
-
How it works:
-
The caller must already hold the SRW lock.
-
The function atomically unlocks the SRW lock and puts the thread to sleep on the condition variable.
-
When awakened by
WakeConditionVariable or WakeAllConditionVariable, it reacquires the SRW lock before returning. -
The caller must recheck the condition because wake-ups may be spurious.
-
-
-
Using
Atomic_Cond:-
The
Futex implementation for Windows will be used instead, which uses atomic_cond_wait -> Ntdll.RtlWaitOnAddress. -
ntdll.dll is the lowest-level user-mode runtime library in Windows, providing the Native API and the gateway to kernel system calls. -
The NT system call interface
-
It provides the user-mode entry points for system calls (
Nt* and Zw* functions). These functions are thin wrappers that transition into kernel mode.
-
-
The Windows Native API (undocumented or semi-documented)
-
This includes functions prefixed with
Rtl*, Ldr*, Nt*, etc. They cover low-level tasks such as process/thread start-up, memory-management helpers, loader functionality, string utilities, and synchronization primitives.
-
-
Process bootstrapping code
-
Every user-mode process loads
ntdll.dll first. It sets up the runtime before the main module's entry point runs.
-
-
Support for critical subsystems
-
Exception dispatching
-
Thread local storage internals
-
Heap internals (working with the kernel)
-
Loader and module management
-
Atomically waiting/waking primitives (like
RtlWaitOnAddress)
-
-
It is not meant for application-level use. Many of its functions are undocumented, can change between Windows releases, and may break compatibility.
-
It is not the same as
kernel32.dll or user32.dll. Those are higher-level and officially documented; they themselves call into ntdll.dll.
-
RtlWaitOnAddress :: proc(Address: rawptr, CompareAddress: rawptr, AddressSize: uint, Timeout: ^i64) -> i32
-
Rtl (Run-time library) + WaitOnAddress → “run-time library: wait on (a) memory address.”
-
"block the calling thread until the memory at a specified address no longer matches a given value (or a timeout/interrupt occurs)".
-
Atomically compares the bytes at
Address with the bytes pointed to by CompareAddress (size AddressSize). -
If they are equal, the caller is put to sleep by the kernel until either the memory changes, a timeout/interrupt occurs, or a wake is issued.
-
If they are different on first check, it returns immediately.
-
Ginger Bill:
-
For some bizarre reason, the
timeout has to be a negative number. -
WaitOnAddress is implemented on top of RtlWaitOnAddress BUT requires taking its return value and, if it is non-zero, converting that status to a DOS error and then calling SetLastError. If this is not done, things don't work as expected when an error occurs. GODDAMN MICROSOFT!
-
-
-
Atomics
Memory Order
-
See Multithreading#Atomics .
Implicit Memory Order
-
Non-explicit atomics will always be sequentially consistent (
.Seq_Cst).
Explicit Memory Order
-
In Odin there are six different memory-ordering guarantees that can be provided to an atomic operation:
Atomic_Memory_Order :: enum {
Relaxed = 0, // Unordered
Consume = 1, // Monotonic
Acquire = 2,
Release = 3,
Acq_Rel = 4,
Seq_Cst = 5,
}
Operations
-
Most of the procedures have a "normal" and an
_explicit variant. -
The "normal" variant will always have a sequentially consistent memory order (
.Seq_Cst). -
The _explicit variant will have the memory order defined by the
order parameter (Atomic_Memory_Order), unless specified differently.
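Side by side, using the core:sync wrappers (a sketch; single-threaded here just to show the call shapes):

```odin
package example

import "core:fmt"
import "core:sync"

main :: proc() {
	x: int

	sync.atomic_store(&x, 1)                     // implicit .Seq_Cst
	sync.atomic_store_explicit(&x, 2, .Release)  // order given explicitly

	a := sync.atomic_load(&x)                    // implicit .Seq_Cst
	b := sync.atomic_load_explicit(&x, .Acquire) // order given explicitly
	fmt.println(a, b)
}
```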
Load / Store
-
atomic_store/atomic_store_explicit-
Atomically store a value into memory.
-
This procedure stores a value to a memory location in such a way that no other thread is able to see partial reads.
-
-
atomic_load/atomic_load_explicit-
Atomically load a value from memory.
-
This procedure loads a value from a memory location in such a way that the received value is not a partial read.
-
-
atomic_exchange/atomic_exchange_explicit-
Atomically exchange the value in a memory location, with the specified value.
-
This procedure loads a value from the specified memory location, and stores the specified value into that memory location. Then the loaded value is returned, all done in a single atomic operation.
-
This operation is an atomic equivalent of the following:
tmp := dst^
dst^ = val
return tmp
-
Compare-Exchange
-
atomic_compare_exchange_strong/atomic_compare_exchange_strong_explicit-
Atomically compare and exchange the value with a memory location.
-
This procedure checks if the value pointed to by the
dst parameter is equal to old, and if so, it stores the value new into the memory location, all in a single atomic operation. This procedure returns the old value stored in the memory location and a boolean signifying whether the stored value was equal to old. -
This procedure is an atomic equivalent of the following operation:
old_dst := dst^
if old_dst == old {
	dst^ = new
	return old_dst, true
} else {
	return old_dst, false
}
-
The strong version of compare exchange always returns true when the returned old value stored in the location pointed to by
dst and the old parameter are equal. -
Atomic compare exchange has two memory orderings: One is for the read-modify-write operation, if the comparison succeeds, and the other is for the load operation, if the comparison fails.
-
For the non-explicit version: The memory ordering for both of these operations is sequentially consistent.
-
For the explicit version: The memory ordering for these operations is as specified by
success and failure parameters respectively.
-
-
atomic_compare_exchange_weak/atomic_compare_exchange_weak_explicit-
Atomically compare and exchange the value with a memory location.
-
This procedure checks if the value pointed to by the
dst parameter is equal to old, and if so, it stores the value new into the memory location, all in a single atomic operation. This procedure returns the old value stored in the memory location and a boolean signifying whether the stored value was equal to old. -
This procedure is an atomic equivalent of the following operation:
old_dst := dst^
if old_dst == old {
	// may return false here
	dst^ = new
	return old_dst, true
} else {
	return old_dst, false
}
-
The weak version of compare exchange may return false, even if
dst^ == old. -
On some platforms running weak compare exchange in a loop is faster than a strong version.
-
Atomic compare exchange has two memory orderings: One is for the read-modify-write operation, if the comparison succeeds, and the other is for the load operation, if the comparison fails.
-
For the non-explicit version: The memory ordering for both of these operations is sequentially consistent.
-
For the explicit version: The memory ordering for these operations is as specified by
success and failure parameters respectively.
-
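The "weak compare exchange in a loop" pattern looks like this (a sketch of a lock-free increment; spurious failures simply retry):

```odin
package example

import "core:fmt"
import "core:sync"

main :: proc() {
	x: int = 41

	// Retry until the weak CAS succeeds.
	for {
		old := sync.atomic_load_explicit(&x, .Relaxed)
		if _, ok := sync.atomic_compare_exchange_weak_explicit(&x, old, old+1, .Acq_Rel, .Relaxed); ok {
			break
		}
	}
	fmt.println(x) // 42
}
```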
Arithmetic
-
atomic_add/atomic_add_explicit-
Atomically add a value to the value stored in memory.
-
This procedure loads a value from memory, adds the specified value to it, and stores it back as an atomic operation.
-
This operation is an atomic equivalent of the following:
-
dst^ += val
-
-
atomic_sub/atomic_sub_explicit-
Atomically subtract a value from the value stored in memory.
-
This procedure loads a value from memory, subtracts the specified value from it, and stores the result back as an atomic operation.
-
This operation is an atomic equivalent of the following:
-
dst^ -= val
-
-
Logical
-
atomic_and/atomic_and_explicit-
Atomically replace the memory location with the result of AND operation with the specified value.
-
This procedure loads a value from memory, calculates the result of AND operation between the loaded value and the specified value, and stores it back into the same memory location as an atomic operation.
-
This operation is an atomic equivalent of the following:
-
dst^ &= val
-
-
-
atomic_nand/atomic_nand_explicit-
Atomically replace the memory location with the result of NAND operation with the specified value.
-
This procedure loads a value from memory, calculates the result of NAND operation between the loaded value and the specified value, and stores it back into the same memory location as an atomic operation.
-
This operation is an atomic equivalent of the following:
-
dst^ = ~(dst^ & val)
-
-
atomic_or/atomic_or_explicit-
Atomically replace the memory location with the result of OR operation with the specified value.
-
This procedure loads a value from memory, calculates the result of OR operation between the loaded value and the specified value, and stores it back into the same memory location as an atomic operation.
-
This operation is an atomic equivalent of the following:
-
dst^ |= val
-
-
-
-
atomic_xor/atomic_xor_explicit-
Atomically replace the memory location with the result of XOR operation with the specified value.
-
This procedure loads a value from memory, calculates the result of XOR operation between the loaded value and the specified value, and stores it back into the same memory location as an atomic operation.
-
This operation is an atomic equivalent of the following:
-
dst^ ~= val
-
Ordering
-
atomic_thread_fence-
Establish memory ordering.
-
This procedure establishes memory ordering, without an associated atomic operation.
-
-
atomic_signal_fence-
Establish memory ordering between a current thread and a signal handler.
-
This procedure establishes memory ordering between a thread and a signal handler, that run on the same thread, without an associated atomic operation.
-
This procedure is equivalent to
atomic_thread_fence, except it doesn't issue any CPU instructions for memory ordering.
-
Barrier (
sync.Barrier
)
Cond :: struct {
impl: _Cond,
}
Mutex :: struct {
impl: _Mutex,
}
Barrier :: struct {
mutex: Mutex,
cond: Cond,
index: int,
generation_id: int,
thread_count: int,
}
-
For any other OS:
Futex :: distinct u32
Atomic_Cond :: struct {
state: Futex,
}
_Cond :: struct {
cond: Atomic_Cond,
}
Atomic_Mutex_State :: enum Futex {
Unlocked = 0,
Locked = 1,
Waiting = 2,
}
Atomic_Mutex :: struct {
state: Atomic_Mutex_State,
}
_Mutex :: struct {
mutex: Atomic_Mutex,
}
-
For Windows:
LPVOID :: rawptr
CONDITION_VARIABLE :: struct {
ptr: LPVOID,
}
_Cond :: struct {
cond: win32.CONDITION_VARIABLE,
}
SRWLOCK :: struct {
ptr: LPVOID,
}
_Mutex :: struct {
srwlock: win32.SRWLOCK,
}
-
See Multithreading#Barrier .
Example
package example

import "core:fmt"
import "core:sync"
import "core:thread"

barrier := &sync.Barrier{}

main :: proc() {
	THREAD_COUNT :: 4
	threads: [THREAD_COUNT]^thread.Thread

	sync.barrier_init(barrier, THREAD_COUNT)
	for _, i in threads {
		threads[i] = thread.create_and_start(proc(t: ^thread.Thread) {
			// Same messages will be printed together but without any interleaving
			fmt.println("Getting ready!")
			sync.barrier_wait(barrier)
			fmt.println("Off their marks they go!")
		})
	}
	for t in threads {
		thread.destroy(t)
	}
}
Usage
-
-
Initializes the barrier for the specified amount of participant threads.
barrier_init :: proc "contextless" (b: ^Barrier, thread_count: int) {
	when ODIN_VALGRIND_SUPPORT {
		vg.helgrind_barrier_resize_pre(b, uint(thread_count))
	}
	b.index = 0
	b.generation_id = 0
	b.thread_count = thread_count
}
-
-
-
Blocks the execution of the current thread, until all threads have reached the same point in the execution of the thread proc.
barrier_wait :: proc "contextless" (b: ^Barrier) -> (is_leader: bool) {
	when ODIN_VALGRIND_SUPPORT {
		vg.helgrind_barrier_wait_pre(b)
	}
	guard(&b.mutex)
	local_gen := b.generation_id
	b.index += 1
	if b.index < b.thread_count {
		for local_gen == b.generation_id && b.index < b.thread_count {
			cond_wait(&b.cond, &b.mutex)
		}
		return false
	}
	b.index = 0
	b.generation_id += 1
	cond_broadcast(&b.cond)
	return true
}
-
Semaphore (
sync.Sema
)
Futex :: distinct u32
Atomic_Sema :: struct {
count: Futex,
}
_Sema :: struct {
atomic: Atomic_Sema,
}
Sema :: struct {
impl: _Sema,
}
-
See Multithreading#Semaphore .
-
Note : A semaphore must not be copied after first use (e.g., after posting to it). This is because, in order to coordinate with other threads, all threads must watch the same memory address to know when the lock has been released. Trying to use a copy of the lock at a different memory address will result in broken and unsafe behavior. For this reason, semaphores are marked as
#no_copy.
Usage
-
I'm not sure how to use this.
-
-
Increment the internal counter on a semaphore by the specified amount.
-
If any of the threads were waiting on the semaphore, up to
count threads will continue execution and enter the critical section. -
Internally it's just an
atomic_add_explicit + futex_signal / futex_broadcast.
atomic_sema_post :: proc "contextless" (s: ^Atomic_Sema, count := 1) {
	atomic_add_explicit(&s.count, Futex(count), .Release)
	if count == 1 {
		futex_signal(&s.count)
	} else {
		futex_broadcast(&s.count)
	}
}

_sema_post :: proc "contextless" (s: ^Sema, count := 1) {
	when ODIN_VALGRIND_SUPPORT {
		vg.helgrind_sem_post_pre(s)
	}
	atomic_sema_post(&s.impl.atomic, count)
}

sema_post :: proc "contextless" (s: ^Sema, count := 1) {
	_sema_post(s, count)
}
-
-
Wait on a semaphore until the internal counter is non-zero.
-
This procedure blocks the execution of the current thread, until the semaphore counter is non-zero, and atomically decrements it by one, once the wait has ended.
-
Internally it's just an
atomic_load_explicit + futex_wait + atomic_compare_exchange_strong_explicit.
atomic_sema_wait :: proc "contextless" (s: ^Atomic_Sema) {
	for {
		original_count := atomic_load_explicit(&s.count, .Relaxed)
		for original_count == 0 {
			futex_wait(&s.count, u32(original_count))
			original_count = atomic_load_explicit(&s.count, .Relaxed)
		}
		if original_count == atomic_compare_exchange_strong_explicit(&s.count, original_count, original_count-1, .Acquire, .Acquire) {
			return
		}
	}
}

_sema_wait :: proc "contextless" (s: ^Sema) {
	atomic_sema_wait(&s.impl.atomic)
	when ODIN_VALGRIND_SUPPORT {
		vg.helgrind_sem_wait_post(s)
	}
}

sema_wait :: proc "contextless" (s: ^Sema) {
	_sema_wait(s)
}
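Answering my own "not sure how to use this": a minimal sketch, one waiter released by one post:

```odin
package example

import "core:fmt"
import "core:sync"
import "core:thread"

sem: sync.Sema

main :: proc() {
	t := thread.create_and_start(proc() {
		sync.sema_wait(&sem) // blocks until the counter is non-zero
		fmt.println("worker released")
	})

	sync.sema_post(&sem) // increments the counter, wakes one waiter

	thread.join(t)
	thread.destroy(t)
}
```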
Benaphore (
sync.Benaphore
)
Futex :: distinct u32
Atomic_Sema :: struct {
count: Futex,
}
_Sema :: struct {
atomic: Atomic_Sema,
}
Sema :: struct {
impl: _Sema,
}
Benaphore :: struct {
counter: i32,
sema: Sema,
}
-
See Multithreading#Benaphore .
Usage
-
Seems like a Mutex + Semaphore combined?
-
-
Acquire a lock on a benaphore. If the lock on a benaphore is already held, this procedure also blocks the execution of the current thread, until the lock could be acquired.
-
Once a lock is acquired, all threads attempting to take a lock will be blocked from entering any critical sections associated with the same benaphore, until the lock is released.
benaphore_lock :: proc "contextless" (b: ^Benaphore) {
	if atomic_add_explicit(&b.counter, 1, .Acquire) > 0 {
		sema_wait(&b.sema)
	}
}
-
-
Release a lock on a benaphore. If any of the threads are waiting on the lock, exactly one thread is allowed into a critical section associated with the same benaphore.
benaphore_unlock :: proc "contextless" (b: ^Benaphore) {
	if atomic_sub_explicit(&b.counter, 1, .Release) > 1 {
		sema_post(&b.sema)
	}
}
Recursive Benaphore (
sync.Recursive_Benaphore
)
Futex :: distinct u32
Atomic_Sema :: struct {
count: Futex,
}
_Sema :: struct {
atomic: Atomic_Sema,
}
Sema :: struct {
impl: _Sema,
}
Recursive_Benaphore :: struct {
counter: int,
owner: int,
recursion: i32,
sema: Sema,
}
See Multithreading#Recursive Benaphore .
Usage
-
-
Acquire a lock on a recursive benaphore. If the benaphore is held by another thread, this function blocks until the lock can be acquired.
-
Once a lock is acquired, all other threads attempting to acquire a lock will be blocked from entering any critical sections associated with the same recursive benaphore, until the lock is released.
recursive_benaphore_lock :: proc "contextless" (b: ^Recursive_Benaphore) {
	tid := current_thread_id()
	check_owner: if tid != atomic_load_explicit(&b.owner, .Acquire) {
		atomic_add_explicit(&b.counter, 1, .Relaxed)
		if _, ok := atomic_compare_exchange_strong_explicit(&b.owner, 0, tid, .Release, .Relaxed); ok {
			break check_owner
		}
		sema_wait(&b.sema)
		atomic_store_explicit(&b.owner, tid, .Release)
	}
	// inside the lock
	b.recursion += 1
}
-
-
Release a lock on a recursive benaphore. It also causes the critical sections associated with the same benaphore, to become open for other threads for entering.
recursive_benaphore_unlock :: proc "contextless" (b: ^Recursive_Benaphore) {
	tid := current_thread_id()
	assert_contextless(tid == atomic_load_explicit(&b.owner, .Relaxed), "tid != b.owner")
	b.recursion -= 1
	recursion := b.recursion
	if recursion == 0 {
		if atomic_sub_explicit(&b.counter, 1, .Relaxed) == 1 {
			atomic_store_explicit(&b.owner, 0, .Release)
		} else {
			sema_post(&b.sema)
		}
	}
	// outside the lock
}
Auto Reset Event (
sync.Auto_Reset_Event
)
Auto_Reset_Event :: struct {
status: i32,
sema: Sema,
}
Usage
-
Status :
-
status == 0: Event is reset and no threads are waiting -
status == 1: Event is signalled -
status == -N: Event is reset and N threads are waiting
-
Mutex (
sync.Mutex
)
Mutex :: struct {
impl: _Mutex,
}
-
For any other OS:
Atomic_Mutex_State :: enum Futex {
Unlocked = 0,
Locked = 1,
Waiting = 2,
}
Atomic_Mutex :: struct {
state: Atomic_Mutex_State,
}
_Mutex :: struct {
mutex: Atomic_Mutex,
}
-
For Windows:
LPVOID :: rawptr
SRWLOCK :: struct {
ptr: LPVOID,
}
_Mutex :: struct {
srwlock: win32.SRWLOCK,
}
-
Note : A Mutex must not be copied after first use (e.g., after locking it the first time). This is because, in order to coordinate with other threads, all threads must watch the same memory address to know when the lock has been released. Trying to use a copy of the lock at a different memory address will result in broken and unsafe behavior. For this reason, Mutexes are marked as
#no_copy. -
Note : If the current thread attempts to lock a mutex while it's already holding another lock, that causes a trivial case of deadlock. Do not use
Mutex in recursive functions. In case multiple locks by the same thread are desired, use Recursive_Mutex.
Usage
-
-
Returns
true if success, false if failure.
-
-
-
Scoped
lock + unlock.
-
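A sketch of lock/unlock vs the scoped guard idiom (two threads bumping a shared counter; `sync.guard` locks and defers the unlock to the end of the scope):

```odin
package example

import "core:fmt"
import "core:sync"
import "core:thread"

counter: int
mtx: sync.Mutex

bump :: proc() {
	if sync.guard(&mtx) { // lock now, auto-unlock at end of this scope
		counter += 1
	}
}

main :: proc() {
	t := thread.create_and_start(proc() {
		for _ in 0..<1000 { bump() }
	})
	for _ in 0..<1000 { bump() }
	thread.join(t)
	thread.destroy(t)
	fmt.println(counter) // 2000
}
```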
-
-
Wait until the condition variable is signalled and release the associated mutex.
-
This procedure blocks the current thread until the specified condition variable is signalled, or until a spurious wakeup occurs. In addition, if the condition has been signalled, this procedure releases the lock on the specified mutex.
-
The mutex must be held by the calling thread, before calling the procedure.
-
Note : This procedure can return on a spurious wake-up, even if the condition variable was not signalled by a thread.
Futex (
sync.Futex
)
Futex :: distinct u32
-
Uses a pointer to a 32-bit value as an identifier of the queue of waiting threads. The value pointed to by that pointer can be used to store extra data.
-
IMPORTANT : A futex must not be copied after first use (e.g., after waiting on it the first time, or signalling it). This is because, in order to coordinate with other threads, all threads must watch the same memory address. Trying to use a copy of the lock at a different memory address will result in broken and unsafe behavior.
Usage
-
The implementations of these procedures are heavily OS-dependent.
-
-
Notify one thread.
-
-
-
Notify all threads.
-
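A sketch of the futex wait/signal handshake (one thread flips a flag and signals; the other sleeps only while the value still matches the expected 0):

```odin
package example

import "core:fmt"
import "core:sync"
import "core:thread"

ready: sync.Futex // 0 = not ready, 1 = ready

main :: proc() {
	t := thread.create_and_start(proc() {
		sync.atomic_store(&ready, sync.Futex(1))
		sync.futex_signal(&ready) // wake one waiter
	})

	// Re-check after every wake-up; futex_wait returns immediately
	// if the value no longer matches the expected one.
	for sync.atomic_load(&ready) == 0 {
		sync.futex_wait(&ready, 0)
	}
	fmt.println("ready")

	thread.join(t)
	thread.destroy(t)
}
```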
One Shot Event (
sync.One_Shot_Event
)
Futex :: distinct u32
One_Shot_Event :: struct {
state: Futex,
}
Usage
Parker (
sync.Parker
)
Futex :: distinct u32
Parker :: struct {
state: Futex,
}
-
See Multithreading#Parker .
Usage
Read-Write Mutex (
sync.RW_Mutex
) / (
sys_windows.SRWLock
)
RW_Mutex :: struct {
impl: _RW_Mutex,
}
-
For any other OS:
Futex :: distinct u32
Atomic_RW_Mutex_State :: distinct uint
Atomic_Mutex_State :: enum Futex {
Unlocked = 0,
Locked = 1,
Waiting = 2,
}
Atomic_Mutex :: struct {
state: Atomic_Mutex_State,
}
Atomic_Sema :: struct {
count: Futex,
}
Atomic_RW_Mutex :: struct {
state: Atomic_RW_Mutex_State,
mutex: Atomic_Mutex,
sema: Atomic_Sema,
}
_RW_Mutex :: struct {
mutex: Atomic_RW_Mutex,
}
-
For Windows:
LPVOID :: rawptr
SRWLOCK :: struct {
ptr: LPVOID,
}
_RW_Mutex :: struct {
srwlock: win32.SRWLOCK,
// The same as _Mutex for Windows.
}
-
Note : A read-write mutex must not be copied after first use (e.g., after acquiring a lock). This is because, in order to coordinate with other threads, all threads must watch the same memory address to know when the lock has been released. Trying to use a copy of the lock at a different memory address will result in broken and unsafe behavior. For this reason, mutexes are marked as
#no_copy. -
Note : A read-write mutex is not recursive. Do not attempt to acquire an exclusive lock more than once from the same thread, or an exclusive and shared lock on the same thread. Taking a shared lock multiple times is acceptable.
Usage
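A sketch of the shared (read) vs exclusive (write) split, using the explicit lock/unlock procedures from core:sync:

```odin
package example

import "core:fmt"
import "core:sync"

table: map[string]int
rw: sync.RW_Mutex

lookup :: proc(key: string) -> (int, bool) {
	sync.rw_mutex_shared_lock(&rw) // many readers may hold this at once
	defer sync.rw_mutex_shared_unlock(&rw)
	v, ok := table[key]
	return v, ok
}

insert :: proc(key: string, v: int) {
	sync.rw_mutex_lock(&rw) // exclusive: blocks readers and writers
	defer sync.rw_mutex_unlock(&rw)
	table[key] = v
}

main :: proc() {
	insert("a", 1)
	v, ok := lookup("a")
	fmt.println(v, ok)
}
```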
Once (
sync.Once
)
Once :: struct {
m: Mutex,
done: bool,
}
-
See Multithreading#Once .
Usage
-
-
once_do_without_data :: proc(o: ^Once, fn: proc()) {
	@(cold)
	do_slow :: proc(o: ^Once, fn: proc()) {
		guard(&o.m)
		if !o.done {
			fn()
			atomic_store_explicit(&o.done, true, .Release)
		}
	}
	if atomic_load_explicit(&o.done, .Acquire) == false {
		do_slow(o, fn)
	}
}
-
once_do_with_data :: proc(o: ^Once, fn: proc(data: rawptr), data: rawptr) {
	@(cold)
	do_slow :: proc(o: ^Once, fn: proc(data: rawptr), data: rawptr) {
		guard(&o.m)
		if !o.done {
			fn(data)
			atomic_store_explicit(&o.done, true, .Release)
		}
	}
	if atomic_load_explicit(&o.done, .Acquire) == false {
		do_slow(o, fn, data)
	}
}
-
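Calling it goes through the `sync.once_do` proc group; a sketch where the body runs exactly once despite three calls:

```odin
package example

import "core:fmt"
import "core:sync"

once: sync.Once
runs: int

init_subsystem :: proc() {
	runs += 1
	fmt.println("initialized exactly once")
}

main :: proc() {
	for _ in 0..<3 {
		sync.once_do(&once, init_subsystem) // only the first call runs the proc
	}
	assert(runs == 1)
}
```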
Ticket Mutex (
sync.Ticket_Mutex
)
Ticket_Mutex :: struct {
ticket: uint,
serving: uint,
}
Usage
Condition Variable (
sync.Cond
)
Cond :: struct {
impl: _Cond,
}
-
For any other OS:
Futex :: distinct u32
Atomic_Cond :: struct {
state: Futex,
}
_Cond :: struct {
cond: Atomic_Cond,
}
-
For Windows:
LPVOID :: rawptr
CONDITION_VARIABLE :: struct {
ptr: LPVOID,
}
_Cond :: struct {
cond: win32.CONDITION_VARIABLE,
}
-
Note : A condition variable must not be copied after first use (e.g., after waiting on it the first time). This is because, in order to coordinate with other threads, all threads must watch the same memory address to know when the lock has been released. Trying to use a copy of the lock at a different memory address will result in broken and unsafe behavior. For this reason, condition variables are marked as
#no_copy.
Usage
Wait Group (
sync.Wait_Group
)
Wait_Group :: struct {
counter: int,
mutex: Mutex,
cond: Cond,
}
-
Note : Just like any synchronization primitives, a wait group cannot be copied after first use.
Usage
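A sketch of the usual add / done / wait flow (names from core:sync; each worker defers `wait_group_done`, main blocks until the counter hits zero):

```odin
package example

import "core:fmt"
import "core:sync"
import "core:thread"

wg: sync.Wait_Group

main :: proc() {
	N :: 4
	sync.wait_group_add(&wg, N)

	threads: [N]^thread.Thread
	for _, i in threads {
		threads[i] = thread.create_and_start(proc() {
			defer sync.wait_group_done(&wg)
			fmt.println("worker done")
		})
	}

	sync.wait_group_wait(&wg) // blocks until the counter reaches zero
	for t in threads { thread.destroy(t) }
}
```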
Directives
-
#by_ptr-
For
const T * arguments in bindings.
-
-
#caller_location-
Barinzaya: It contains static strings, so AFAIK the
strings in it should be null-terminated and safely convertible to cstrings.
-
FFI (Foreign Function Interface) / Bindings
-
-
"
proc "c"is__cdecl" -
"A
procsignature, when used as a type, is already a proc pointer" -
-
Helps understand RayLib bindings.
-
-
-
cstring. -
#by_ptr. -
-
"foreign import
.asmactually does nothing if your target is an object file."
-
Web Build
Not-WASM
WebUI
-
-
"Use any web browser as GUI, with Odin in the backend and modern web technologies in the frontend.".
-
WebUI .
-
WebUI's primary focus is using web browsers as GUI, but starting from v2.5, WebUI can also use WebView if you need to use WebView instead of a web browser.
-
Docs .
-
-
(2025-10-12)
-
.
-
This screenshot summarizes everything.
-
I added the repo as a submodule.
-
Ran the
setup.ps1 inside the submodule. -
Created a
main.odin file. -
Pasted the code from the "minimal example".
-
Ran
odin run and this window appeared.
-
-
My impression is that everything is exceptionally opaque. I have no idea what happened. The package is just a binding for the C library. Nothing is native, except for some mini-wrapper for a procedure, for error handling.
-
I didn't have a good impression.
-
-
Templating
-
-
Extremely simple.
-
Implements just a procedure to replace content inside
{{ }}. -
Not a template engine by itself.
-
-
~ temple .
-
An experimental in-development templating engine for Odin
-
Works via
{{ }}. -
Supports Odin expressions, based on the given context/data
-
{{ this.name.? or_else "no name" }} -
{{ this.welcome if this.user.new else "" }}.
-
-
Sounds better than mustache, at least because it follows Odin's syntax.
-
-
This is mainly here to dogfood the libraries and provide an example.
-
TodoMVC is a project for comparing web projects, benchmarking, etc. You implement and compare.
-
(2025-10-12)
-
HTMX seemed to be only inside
.twig files, i.e., in the templates. -
I tried to build and had several issues:
-
Submodules were completely broken, asking for an ssh key, even though the repo is public. I don't know if this makes sense.
-
I had to remove the old submodules and get them again using the public address:
-
From
git@github.com:laytan/temple.git to https://github.com/laytan/temple, for example.
-
-
-
The main project simply doesn't compile.
-
There are several errors in Odin and usage that simply don't make sense.
-
I didn't understand. Odin simply doesn't allow what the author tried to do; it's not part of the language.
-
Tried calling functions in the global scope, for example.
Error: Procedures requiring a 'context' cannot be called at the global scope
...
pl_index := temple.compiled("templates/index.temple.twig", List)
-
-
-
-
-
-
-
~ odin-mustache .
-
Native implementation of mustache .
-
Port of the "Mustache Logic-less Ruby templates".
-
-
Works via
{{ }}. -
In theory, I prefer Temple, at least because it follows Odin's syntax.
-
WASM
Limitations
-
Virtual memory does not exist on the web, so virtual memory allocators will not work.
File System / Process / CLI / Shell
Load at compile-time
-
#load .
-
Returns a
[]u8 of the file contents at compile time. -
The loaded data is baked into your program.
-
You can provide a type name as a second argument, interpreting the data as being of that type.
-
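A sketch of both forms (the path is hypothetical; the file must exist relative to the source file at compile time):

```odin
package example

import "core:fmt"

// Baked into the binary at compile time.
DATA :: #load("assets/config.txt")         // []u8
TEXT :: #load("assets/config.txt", string) // same bytes, typed as string

main :: proc() {
	fmt.println(len(DATA), len(TEXT))
}
```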
-
-
Loads all files within a directory, at compile time.
-
The data is
name: stringanddata: []byte.
-
-
All the data of those files will be baked into your program.
-
-
-
Returns a constant integer of the hash of a file’s contents at compile time.
-
Available hashes:
"adler32","crc32","crc64","fnv32","fnv64","fnv32a","fnv64a","murmur32", or"murmur64".
-
core:os2
-
core:os/os2 -
It will replace
core:os in 2026. -
(2025-07-07)
-
It's not on the web docs yet. Technically it's still WIP, though some parts of it are quite usable.
-
-
process_exec-
run with piped output and wait.
-
Process Execute
-
Must :
-
This procedure expects that the
stdout and stderr fields of the desc parameter are left at their default, i.e. a nil value. You cannot capture stdout/stderr and redirect it to a file at the same time. -
assert(desc.stdout == nil, "Cannot redirect stdout when it's being captured", loc) -
assert(desc.stderr == nil, "Cannot redirect stderr when it's being captured", loc)
-
-
Memory :
-
This procedure does not free the
stdout and stderr slices before an error is returned. Make sure to call delete on these slices.
-
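A sketch of a captured run (the command is just an example; `Process_Desc.command` and the return tuple are my reading of core:os/os2):

```odin
package example

import "core:fmt"
import os2 "core:os/os2"

main :: proc() {
	// Run the command, wait for it, and capture piped stdout/stderr.
	state, stdout, stderr, err := os2.process_exec(
		{command = {"odin", "version"}},
		context.allocator,
	)
	defer delete(stdout)
	defer delete(stderr)

	if err != nil {
		fmt.eprintln("exec failed:", err)
		return
	}
	fmt.println(state.exit_code, string(stdout))
}
```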
Process Start
-
process_start -
Asynchronous and more configurable, but requires more setup.
-
See Shells .
core:os
-
Handle.-
Used to perform many operations.
-
-
open. -
read. -
read_entire_file_from_filename. -
core/os/os.odin:131: Automatically does open and then close.
-
-
read_entire_file_from_handle. -
core/os/os.odin:141: Does not do open or close.
-
-
-
write. -
flush. -
close. -
exists. -
is_file. -
remove. -
rename.
core:c/libc
-
core:c/libc -
Not native in Odin.
-
Has
system for just running basic command-line commands. -
(2025-10-29)
-
I was using
libc.system for some basic commands, but once I learned how to use os2, I think it is much better and should be the go-to for CLI work.
Useful Packages
Math
-
-
Fast Fourier Transform (FFT) written in the Odin language.
-
Geometry
Shader
Pathfinding
Logger
-
By default, there is no logger in the Context.
Using a logger
import "core:log"
Creating a logger
context.logger = log.create_console_logger()
// or
context.logger = log.create_file_logger()
-
.
Options
context.logger = log.create_console_logger(
opt = log.Options{
.Level,
.Terminal_Color,
// .Short_File_Path,
.Procedure,
// .Line,
// .Thread_Id,
}
)
Json
Marshal and Unmarshal
-
Struct field tags :
User :: struct {
	flag: bool, // untagged field
	age:  int "custom whatever information",
	name: string `json:"username" xml:"user-name" fmt:"q"`, // `core:reflect` layout
}
-
If multiple pieces of information are to be passed in the
"value", they are usually separated with a comma (,): name: string `json:"username,omitempty"`.
-
-
About unions :
-
core:encoding/json is pretty simple when it comes to unions; it just takes the first variant that it can unmarshal without error. For structs it doesn't consider an unknown field to be an error, though, and I don't think there's a way to make it do so.
-
Comparison
-
In Odin :
-
Simple layout :
for tileset_info in mundo["defs"].(json.Object)["tilesets"].(json.Array) {
	if tileset_info.(json.Object)["identifier"].(json.String) == "Internal_Icons" {
		continue
	}
}
-
Practical layout :
for item in mundo["defs"].(json.Object)["tilesets"].(json.Array) {
	tileset_info := item.(json.Object)
	if tileset_info["identifier"].(json.String) == "Internal_Icons" {
		continue
	}
}
-
-
In Zig :
-
Simple layout :
for (jsonParsed.value.object.get("defs").?.object.get("tilesets").?.array.items) |item| {
if (std.mem.eql(u8, item.object.get("identifier").?.string, "Internal_Icons")) {
continue;
}
}
-
Practical layout :
for (jsonParsed.value.object.get("defs").?.object.get("tilesets").?.array.items) |item| {
const info_tileset = item.object;
if (std.mem.eql(u8, info_tileset.get("identifier").?.string, "Internal_Icons")) {
continue;
}
}
-
-
In Godot :
-
Without reinforcing casting :
for ts in mundo.get('defs').get('tilesets'):
	if (ts.get('identifier') == 'Internal_Icons'):
		continue
-
Slightly reinforcing casting : (Using like this atm)
for ts: Dictionary in (mundo.get('defs') as Dictionary).get('tilesets'):
	if (ts.get('identifier') == 'Internal_Icons'):
		continue
-
Reinforcing casting :
for ts: Dictionary in ((mundo.get('defs') as Dictionary).get('tilesets') as Array):
	if ((ts.get('identifier') as String) == 'Internal_Icons'):
		continue
-
SQL
Plotting
Network
-
"0 bytes received means the connection was closed normally/gracefully, and then you have the
.Connection_Closederror for abnormal closes". -
.Would_Block -
"it's not an actual error in this case, it just uses the error slot to indicate that you need to wait."
-
-
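The two cases above can be sketched as a receive loop; note that net.recv_tcp and the exact error variant names here are assumptions from my reading of core:net, not verified against the current API:

```odin
package main

import "core:fmt"
import "core:net"

// Hypothetical helper: drain a socket until it closes.
read_loop :: proc(sock: net.TCP_Socket) {
	buf: [4096]byte
	for {
		n, err := net.recv_tcp(sock, buf[:])
		if err != nil {
			// .Would_Block is not a real failure on a non-blocking socket:
			// the error slot just signals "no data yet, wait and retry".
			// Anything else (e.g. .Connection_Closed) is an abnormal close.
			fmt.eprintln("recv error:", err)
			return
		}
		if n == 0 {
			// 0 bytes received: the peer closed normally/gracefully.
			fmt.println("connection closed gracefully")
			return
		}
		fmt.println("received", n, "bytes")
	}
}
```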
odinhttp .
Terminal Utilities
Capturing
ctrl + C
in the Terminal
Windows
-
Info .
package shnt

import "core:log"
import "core:sys/windows"

main :: proc() {
win_handler_ok := windows.SetConsoleCtrlHandler(win_handler, windows.TRUE)
if !win_handler_ok {
log.error("win_handler not ok")
return
}
for !wants_to_exit {
}
}
wants_to_exit := false
win_handler :: proc "system" (dwCtrlType: windows.DWORD) -> windows.BOOL {
// fmt.printfln("dwCtrlType: %v", dwCtrlType)
switch dwCtrlType {
case windows.CTRL_C_EVENT, windows.CTRL_BREAK_EVENT, windows.CTRL_CLOSE_EVENT:
wants_to_exit = true
}
return windows.TRUE
}
Linux
package shnt
import "core:fmt"
import "core:sys/linux"
_got_int: bool
_int_handler :: proc "c" (sig: linux.Signal) {
_got_int = true
}
main :: proc() {
sigact: linux.Sig_Action(int) = {
handler = _int_handler,
}
old_sigact: ^linux.Sig_Action(int)
linux.rt_sigaction(.SIGINT, &sigact, old_sigact)
for !_got_int { }
fmt.println("got sigint!")
}
Colors and Strings
Formats
-
-
Using mpc might be of interest to you if you are...
-
Building a new programming language
-
Building a new data format
-
Parsing an existing programming language
-
Parsing an existing data format
-
Embedding a Domain Specific Language
-
Implementing Greenspun's Tenth Rule
-
-
Image Formats
-
-
Assimp .
-
List of all file formats supported .
-
Support for BLEND is deprecated. It is too time-consuming to maintain an undocumented format which contains so much more than we need.
-
No .exr support.
-
-
Can calculate the tangents for each vertex if a flag is passed during readFile.
-
-
png :
-
In PNG, the alpha channel is optional.
-
width, height, channels: i32
data := image.load("file.png", &width, &height, &channels, 4)
-
The first 3 arguments are set by the function as it reads the data, so if you have an RGB image, channels will be 3, but it'll load 4 because 4 was specified as the desired_channels; you can then do data[:width * height * 4] to get a []byte. -
The number of components N is desired_channels if desired_channels is non-zero, or *channels_in_file otherwise. If desired_channels is non-zero, *channels_in_file has the number of components that would have been output otherwise. E.g. if you set desired_channels to 4, you will always get RGBA output.
-
-
Performance :
-
Caio:
-
hello, I'm using core:image/png to read a 4k png image and it seems really slow, taking 8-10 seconds to complete. This is the code I'm using: png.load_from_file(path, { .alpha_add_if_missing }, context.temp_allocator). Is there something here I should be aware of? I'm loading the texture to then send it to the GPU with a vulkan host_visible/host_coherent staging buffer, and then to a device_local image. I profiled the whole process and I'm pretty sure this png load is what is slowing things down. Do you have some tips for this? I don't know much about png, so I don't know what to expect, but this seems too much
-
-
Yawning:
-
if you profile it, my gut feeling is that it is zlib, since we have a naive implementation, but that's just a guess
-
we do try to make core fast, but maintainability/ease of implementation take priority atm
-
"I think core:image is basically always gonna be slower than stb", I wouldn't say always, this can be made faster, but it's a time/effort/personel thing.
-
-
Barinzaya:
-
Is that in an optimized build? It's pure Odin code, so optimization settings will affect it, and they're usually significant.
-
That being said, even in an optimized build, AFAIK stb_image is typically faster (though it comes with cautions about using it with untrusted images, if that applies to you)
-
-
Caio:
-
oh yea, with -o:speed the load time drops to ~1s -
just out of curiosity: with -o:speed, image/png takes 344ms to load the 4k image, vs 6ms from stb/image
-
-
-
3D Models
-
gltf :
Config Files
General Data Files
-
odin-bml .
-
Binary Markup Language (BML) is an XML scheme for describing structured binary data. The library contains a protocol parser, a binary data parser, as well as a C header emitter.
-
Markdown
-
-
Bindings for CMark .
-
-
~ odin-markdown .
-
(2025-10-14)
-
Really not ready.
-
It's useful for creating a markdown file while inside Odin, but it's not a parser.
-
If you have an existing md file, it's useless for now.
-
-
-
In C :
-
In Go:-
goldmark .
-
Other Parsers
-
-
RFC-3339.
-
Debug
-
odin-pdb .
-
Reads Microsoft PDB (Program Database) files. Enables stack tracing on Windows for The Odin Programming Language.
-
Media
-
-
Extensible Binary Meta Language (EBML)
-
Matroska and WebM.
-
-
ISO Base Media File Format (BMFF)
-
MP4, HEIF, JPEG 2000, and other formats.
-
-
-
fmod .